
Will a model be trained on AMD GPUs using at least as much compute as GPT-3 before Jan 1, 2026?
84% chance
This question resolves YES if it becomes public knowledge that any ML model has been trained using more than 3.14E+23 FLOPs entirely on AMD GPUs (or, hypothetically, other ML accelerators produced by AMD in the future). Resolution is based on announcement date: if the model is trained before the deadline but only announced later, this resolves NO.
If the trained model is substantially worse than a model trained with that much compute should be, it does not count towards resolution (e.g., a language model whose performance on standard benchmarks is only comparable to that of a model trained with 1/10th the compute). This is mostly intended to exclude failed training runs.
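For intuition on where the threshold comes from: 3.14E+23 FLOPs is GPT-3's reported training compute, and it matches the standard 6 × parameters × tokens estimate for dense transformer training. A minimal Python sketch (parameter and token counts taken from the GPT-3 paper; the function name is illustrative):

    def training_flops(params: float, tokens: float) -> float:
        # Standard dense-transformer estimate: ~6 FLOPs per parameter per token
        return 6 * params * tokens

    gpt3_params = 175e9  # 175B parameters
    gpt3_tokens = 300e9  # ~300B training tokens
    print(f"{training_flops(gpt3_params, gpt3_tokens):.2e}")  # -> 3.15e+23

So any model trained entirely on AMD hardware at GPT-3-scale parameter and token counts would clear the bar, provided its benchmark performance is in line with that compute budget.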
This question is managed and resolved by Manifold.
Related questions
Will a single model running on a single consumer GPU (<1.5k 2020 USD) outperform GPT-3 175B on all benchmarks in the original paper by 2025?
86% chance
Will there be an LLM (as good as GPT-4) that was trained with 1/100th the energy consumed to train GPT-4, by 2026?
83% chance
Will $10,000 worth of AI hardware be able to train a GPT-3 equivalent model in under 1 hour, by EOY 2027?
16% chance
Will a language model comparable to GPT-4 be trained, with ~1/10th the energy it took to train GPT-4, by 2028?
92% chance
Will OpenAI release a model referred to as "GPT-6" before June 1st, 2026?
22% chance
Will we have an open-source model that is equivalent to GPT-4 by end of 2025?
82% chance
Will we see a public GPU compute sharing pool for LLM model training or inference before 2026?
86% chance
Will a GPT-3 quality model be trained for under $10,000 by 2030?
83% chance
Will a GPT-3 quality model be trained for under $1,000 by 2030?
82% chance
Will a GPT-4 quality model be trained for under $10,000 by 2030?
78% chance