Will a model be trained using at least as much compute as GPT-3 using AMD GPUs before Jan 1 2026?

This question resolves YES if it becomes public knowledge that any ML model has been trained using more than 3.14E+23 FLOPs entirely on AMD GPUs (or, hypothetically, other ML accelerators produced by AMD in the future). Resolution is based on announcement time; if the model is trained before the deadline but only announced later, this resolves NO.
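For intuition on where the 3.14E+23 threshold comes from: it matches the commonly cited training compute of GPT-3 (175B parameters trained on roughly 300B tokens), which can be estimated with the standard C ≈ 6·N·D rule of thumb. The sketch below is an illustrative back-of-the-envelope check, not part of the resolution criteria.

```python
# Back-of-the-envelope check of the resolution threshold using the
# standard C ≈ 6 * N * D approximation (N = parameters, D = tokens).
# The GPT-3 figures (175B params, ~300B tokens) are from the GPT-3 paper.

def training_flops(params: float, tokens: float) -> float:
    """Estimate total training compute in FLOPs via the 6ND rule."""
    return 6 * params * tokens

gpt3_flops = training_flops(175e9, 300e9)
THRESHOLD = 3.14e23  # the question's resolution threshold, in FLOPs

print(f"GPT-3 estimate: {gpt3_flops:.2e} FLOPs")
print(f"Meets threshold: {gpt3_flops >= THRESHOLD}")
```

A model would therefore need to match or exceed a GPT-3-scale training run (about 3.15E+23 FLOPs by this estimate) on AMD hardware alone to qualify.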

If the trained model is substantially worse than a model trained with that much compute should be, it does not count toward resolution (e.g., if a language model's performance on standard benchmarks is only comparable to that of a model trained with 1/10th the compute). This is mostly intended to exclude failed attempts.


MI300X for sure