(LW) LLMs as currently trained run ~0 risk of catastrophic instrumental convergence even if scaled up with 1000x more compute
10% chance
Never closes
This question is managed and resolved by Manifold.
Related questions
(long-term) Will existing LLMs have ~0 risk of catastrophic instrumental convergence if scaled up 1000x more compute?
58% chance
Will the best LLM in 2026 have <500 billion parameters?
13% chance
Will the best LLM in 2027 have <250 billion parameters?
12% chance
Will the best LLM in 2027 have <500 billion parameters?
12% chance
[Carlini questions] Will we still use (slight modifications of) transformer-based LLMs we currently use
Will more than 80% of all user queries to LLMs be served by LLMs less than 10 billion parameters in size by 2050?
50% chance
Which High-risk threshold as defined by OpenAI will be reached first by an LLM, whether or not that LLM is released?
Will LLMs be worse than human level at forecasting when they are superhuman at most things?
41% chance
Will LLM inference for the largest models run on analogue circuitry as the primary element of computation by end 2028?
19% chance
Will second-order optimizers displace first-order optimizers for training LLMs by 2030?
43% chance