Will a model be trained using at least as much compute as GPT-3 using AMD GPUs before Jan 1 2026?
71% chance
This question resolves YES if it becomes public knowledge that any ML model has been trained using more than 3.14E+23 FLOPs entirely on AMD GPUs (or, hypothetically, on other ML accelerators AMD produces in the future). Resolution is based on announcement time; if a model is trained before the deadline but only announced after it, this resolves NO.
If the trained model is substantially worse than a model trained with that much compute should be, it does not count towards resolution (e.g., if a language model's performance on standard benchmarks is only comparable to that of a model trained with 1/10th the compute). This is mostly intended to exclude failed attempts.
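The 3.14E+23 FLOPs threshold matches the commonly cited estimate of GPT-3's training compute. As a rough sanity check, a minimal sketch using the standard C ≈ 6·N·D approximation for dense-transformer training compute (N parameters, D training tokens; the GPT-3 figures are the publicly reported ones):

```python
def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs via C ~= 6 * N * D."""
    return 6 * params * tokens

# GPT-3: ~175B parameters trained on ~300B tokens
gpt3_flops = training_flops(175e9, 300e9)
print(f"{gpt3_flops:.2e}")  # ~3.15e+23, close to the question's 3.14E+23 threshold
```

This is only an estimate; the question's threshold is the fixed number 3.14E+23, not whatever a given approximation yields.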
Related questions
Will there be a model that has a 75% win rate against the latest iteration of GPT-4 as of January 1st, 2025?
47% chance
Will an open source model beat GPT-4 in 2024?
65% chance
Will we have an open-source model better than GPT-4-Turbo before 2025?
62% chance
Will OpenAI release the source code and model weights of any of its legacy GPT-3 models before 2025?
24% chance
Will there be an LLM (as good as GPT-4) that was trained with 1/100th the energy consumed to train GPT-4, by 2026?
49% chance
By January 2026, will a language model with similar performance to GPT-4 be able to run locally on the latest iPhone?
70% chance
Will OpenAI release a model referred to as "GPT-6" before June 1st, 2026?
30% chance
How much compute will be used to train GPT-5?
Will any open-source model achieve GPT-4 level performance on MMLU through 2024?
83% chance
Will a language model comparable to GPT-4 be trained, with ~1/10th the amount of energy it took to train GPT-4, by 2028?
77% chance