[Metaculus] Will OpenAI, DeepMind, or Anthropic announce a pause on large training runs for safety reasons, before 2026?
12% chance
Will OpenAI, Google DeepMind, or Anthropic announce that they are pausing all training runs above a certain size for safety reasons, before 2026?
Resolves the same as the original on Metaculus.
Resolution criteria
This question resolves as Yes if, before January 1, 2026, at least one of OpenAI, Google DeepMind, or Anthropic announces that they are pausing all training runs above a certain size, citing safety reasons. The length of the pause does not matter.
The question resolves as No otherwise.
Fine print and additional background information can be found on Metaculus.
Once the original resolves, its resolution will be applied to this market automatically. Trustworthy-ish users are encouraged to resolve this market before then if the outcome is known and unambiguous. Feel free to ping @jskf to request early resolution or to report issues.
This question is managed and resolved by Manifold.
Related questions
Will Anthropic, Google, xAI or Meta release a model that thinks before it responds like o1 from OpenAI by EOY 2024?
51% chance
Will OpenAI, Anthropic, or Google DeepMind suffer a significant security incident by the end of 2024?
25% chance
Will OpenAI announce that they are cooperating with DeepMind, Anthropic, Meta, or Google in order to mitigate race dynamics by 2027?
62% chance
Will OpenAI disappear before 2034?
34% chance
Will OpenAI become notably less pro-AI-safety by the start of 2025 than at the start of 2024?
69% chance
Will there be serious AI safety drama at Google or DeepMind before 2026?
60% chance
Will OpenAI pause capabilities R&D voluntarily before 2027?
16% chance
Will OpenAI become notably more pro-safety by the start of 2025 than before the OpenAI crisis?
18% chance
Will there be a significant AI safety incident involving OpenAI o1 before April 2025?
14% chance
By 2026, will OpenAI commit to delaying a model release if ARC Evals thinks it's dangerous?
28% chance