Will a US Department of Energy high-performance computing cluster be used to train a foundation model of more than 500B parameters by January 1st, 2025?
Ṁ894 · resolved Feb 16
Resolved NO
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ62 |
| 2 | | Ṁ13 |
| 3 | | Ṁ9 |
| 4 | | Ṁ4 |
| 5 | | Ṁ0 |
Inactive creator; extended the close date for continued trading. This seems easy enough for moderators to resolve if/when required.
@RossGruetzemacher If this is a mistake I apologize; we can re-close it.
Related questions
Will there be an announcement of a model with a training compute of over 1e30 FLOPs by the end of 2025?
5% chance
Will the US implement a policy for rapid shutdown of large compute clusters and training runs by 2028?
26% chance
Will the largest machine learning training run (in FLOP) as of the end of 2025 be in the United States?
87% chance
Will we see a public GPU compute sharing pool for LLM model training or inference before 2026?
86% chance
Will the largest machine learning training run (in FLOP) as of the end of 2035 be in the United States?
69% chance
$1T AI training cluster before 2031?
65% chance
Will a model be trained using at least as much compute as GPT-3 using AMD GPUs before Jan 1 2026?
84% chance
Will the largest machine learning training run (in FLOP) as of the end of 2040 be in the United States?
46% chance
Will a machine learning training run exceed 10^26 FLOP in China before 2026?
52% chance
Will the largest machine learning training run (in FLOP) as of the end of 2030 be in the United States?
77% chance