Will the Understanding AI blog mention existential risk as a significant threat before March?
Resolved NO · Apr 15 (Ṁ190 · Ṁ696)
https://www.understandingai.org
The writer seems intelligent and reasonable, having also written the wonderful Full Stack Economics. But they've been rather dismissive of serious AI concerns, focusing instead on things like self-driving cars and job loss. Will they change their mind?
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Trader | Total profit |
|---|---|---|
| 1 | | Ṁ66 |
| 2 | | Ṁ25 |
| 3 | | Ṁ16 |
| 4 | | Ṁ16 |
People are also trading
Will "Deep atheism and AI risk" make the top fifty posts in LessWrong's 2024 Annual Review?
98% chance
Are AI and its effects the most important existential risk, given only public information available in 2021?
89% chance
Will "AI Control May Increase Existential Risk" make the top fifty posts in LessWrong's 2025 Annual Review?
15% chance
Will AI existential risk be mentioned in the White House briefing room again by May 2029?
87% chance
OpenAI CEO doesn't think existential risk from AI is a serious concern in Jan 2026
27% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
Will "Planning for Extreme AI Risks" make the top fifty posts in LessWrong's 2025 Annual Review?
14% chance
Will humanity wipe out AI x-risk before 2030?
11% chance
Will "Comparing risk from internally-deployed AI to..." make the top fifty posts in LessWrong's 2025 Annual Review?
14% chance
Is the nature of AI risk completely misunderstood today with respect to the state of the art in 2030?
30% chance
Looking at the last couple of months of posts, I don't think he's changed his mind at all. @IsaacKing, I think this can resolve NO. Like you, I find Timothy to be a very sharp guy who also seems to underestimate AI x-risk.