
(long-term) Will existing LLMs have ~0 risk of catastrophic instrumental convergence if scaled up 1000x more compute?
58% chance
From https://www.lesswrong.com/posts/hc9nMipTXy2sm3tJb/vote-on-interesting-disagreements
Resolves whenever there is consensus in the AI field on this question.
This question is managed and resolved by Manifold.
Related questions
Will the best LLM in 2026 have <500 billion parameters? (13% chance)
Will the best LLM in 2027 have <500 billion parameters? (12% chance)
Will the best LLM in 2027 have <250 billion parameters? (12% chance)
Will more than 80% of all user queries to LLMs be served by LLMs under 10 billion parameters by 2050? (50% chance)
Will the highest-scoring LLM on Dec 31, 2026 show <10% improvement over 2025's best average benchmark performance? (72% chance)
[Carlini questions] Will we still use (slight modifications of) transformer-based LLMs we currently use
Will there be any major breakthrough in LLM continual learning before 2030? (85% chance)
Will any LLM be able to multiply together arbitrary decimal numbers by the end of 2027? (69% chance)
Will LLM training costs fall 1,000x by 2028? (74% chance)
Will Landauer's limit be reached (to within a factor of 10) in practical computing devices before 2050? (17% chance)