
This question resolves YES if it becomes public knowledge that an ML model has been trained using more than 3.14E+23 FLOPs entirely on AMD GPUs (or, hypothetically, other ML accelerators produced by AMD in the future). Resolution is based on announcement time; if the model is trained before the deadline but only announced later, this resolves NO.
If the trained model is substantially worse than a model trained with that much compute should be, it does not count toward resolution (e.g., if a language model's performance on standard benchmarks is only comparable to that of a model trained with 1/10th the compute). This is mostly intended to exclude failed attempts.
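For reference, the threshold is roughly GPT-3's reported training compute. A minimal sketch using the standard 6*N*D rule of thumb (6 FLOPs per parameter per training token), assuming the commonly reported ~175B parameters and ~300B training tokens for GPT-3:

```python
def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs via the common 6*N*D heuristic."""
    return 6.0 * params * tokens

THRESHOLD = 3.14e23  # the question's resolution threshold

# GPT-3: ~175B parameters, ~300B training tokens (publicly reported figures).
gpt3_estimate = training_flops(175e9, 300e9)
print(f"GPT-3 estimate: {gpt3_estimate:.2e} FLOPs")  # ~3.15e23
print(f"Exceeds threshold: {gpt3_estimate > THRESHOLD}")
```

So a qualifying run is, roughly, "at least a GPT-3-scale training run done entirely on AMD hardware."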
This doesn't quite qualify, but it's close. This model was trained with more FLOPs than GPT-3 on a mix of AMD GPUs and TPUs: https://www.essential.ai/research/rnj-1
GPT-3 isn't that big, the tinygrad work seems promising, and we have all of 2025.