(LW) LLMs as currently trained run ~0 risk of catastrophic instrumental convergence even if scaled 1000x more compute
Never closes
This question is managed and resolved by Manifold.
Related questions
(long-term) Will existing LLMs have ~0 risk of catastrophic instrumental convergence if scaled up 1000x more compute? (58% chance)
Will the best LLM in 2025 have <1 trillion parameters? (42% chance)
Will the best LLM in 2026 have <1 trillion parameters? (40% chance)
Will the best LLM in 2027 have <1 trillion parameters? (26% chance)
Will the best LLM in 2025 have <500 billion parameters? (23% chance)
Will the best LLM in 2026 have <500 billion parameters? (27% chance)
Will the best LLM in 2027 have <250 billion parameters? (12% chance)
Will the best LLM in 2027 have <500 billion parameters? (13% chance)
[Carlini questions] Will we still use (slight modifications of) transformer-based LLMs we currently use
Will RL work for LLMs "spill over" to the rest of RL by 2026? (35% chance)