
Will a model be trained using at least as much compute as GPT-3 using AMD GPUs before Jan 1 2026?
84% chance
This question resolves YES if it becomes public knowledge that any ML model has been trained using more than 3.14E+23 FLOPs entirely on AMD GPUs (or, hypothetically, other ML accelerators produced by AMD in the future). Resolution is based on announcement time; if the model is trained before the deadline but only announced later, this resolves NO.
If the trained model is substantially worse than a model trained with that much compute should be, it does not count towards resolution (e.g. a language model whose performance on standard benchmarks is only comparable to that of a model trained with 1/10th the compute). This is mostly intended to exclude failed attempts.
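The 3.14E+23 FLOP threshold matches common estimates of GPT-3's training compute via the ~6ND rule of thumb (total FLOPs ≈ 6 × parameter count × training tokens). A minimal sketch, assuming GPT-3's reported figures of 175B parameters and 300B training tokens:

```python
# Estimate training compute with the ~6ND rule of thumb:
# FLOPs ≈ 6 × (parameter count) × (training tokens).
# Assumed figures from the GPT-3 paper: 175B parameters, 300B tokens.
def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

gpt3_flops = training_flops(175e9, 300e9)
print(f"{gpt3_flops:.2e}")  # ~3.15e+23, in line with the 3.14E+23 threshold
```

This is an order-of-magnitude estimate, not an exact figure; the market's threshold simply pins resolution to roughly GPT-3-scale compute.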
This question is managed and resolved by Manifold.
GPT-3 isn't that big, the tinygrad work seems promising, and we have all of 2025
Related questions
Will a GPT-4 level system be trained for <$1mm by 2030?
99% chance
Will a language model comparable to GPT-4 be trained, with ~1/10th the amount of energy it took to train GPT-4, by 2028?
99% chance
Will we have an open-source model that is equivalent to GPT-4 by end of 2025?
96% chance
Will a GPT-4 level system be trained for <$1mm by 2028?
99% chance
Before 2028, will anyone train a GPT-4-level model in a minute?
48% chance
Will a GPT-4 quality model be trained for under $10,000 by 2030?
86% chance
Will OpenAI release a model referred to as "GPT-6" before June 1st, 2026?
10% chance
Will a single model running on a single consumer GPU (<1.5k 2020 USD) outperform GPT-3 175B on all benchmarks in the original paper by 2025?
86% chance
Will $10,000 worth of AI hardware be able to train a GPT-3 equivalent model in under 1 hour, by EOY 2027?
16% chance
Will we see a public GPU compute sharing pool for LLM model training or inference before 2026?
86% chance