Will a US Department of Energy high-performance computing cluster be used to train a foundation model of 500B parameters or more by January 1st, 2025?
Ṁ894 · resolved Feb 16
Resolved NO
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Name | Total profit |
|---|---|---|
| 1 | | Ṁ62 |
| 2 | | Ṁ13 |
| 3 | | Ṁ9 |
| 4 | | Ṁ4 |
| 5 | | Ṁ0 |
Inactive creator; extended the close date for continued trading. This seems easy enough to resolve as moderators if / when required.
@RossGruetzemacher If this is a mistake I apologize; we can re-close it.
Related questions
1GW AI training run before 2027?
75% chance
Will there be an announcement of a model with a training compute of over 1e30 FLOPs by the end of 2025?
3% chance
Will the largest machine learning training run (in FLOP) as of the end of 2025 be in the United States?
86% chance
Will we see a public GPU compute sharing pool for LLM model training or inference before 2026?
82% chance
Will a lab train a >=1e26 FLOP state space model before the end of 2025?
15% chance
Will an AI model use more than 1e28 FLOPS in training before 2026?
8% chance
Will the US implement a policy for rapid shutdown of large compute clusters and training runs by 2028?
33% chance
Will a model be trained using at least as much compute as GPT-3 using AMD GPUs before Jan 1 2026?
80% chance
Will a machine learning training run exceed 10^26 FLOP in China before 2026?
52% chance
Will the largest machine learning training run (in FLOP) as of the end of 2035 be in the United States?
69% chance