
In a year, will Peter Wildeford believe that AI is the largest single source of existential risk?
Resolved YES on Aug 18 · 1.6k traders · Ṁ7053 volume
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ67 |
| 2 | | Ṁ49 |
| 3 | | Ṁ36 |
| 4 | | Ṁ34 |
| 5 | | Ṁ27 |
People are also trading
By end of 2028, will AI be considered a bigger x risk than climate change by the general US population?
61% chance
Will "Ten arguments that AI is an existential risk" make the top fifty posts in LessWrong's 2024 Annual Review?
10% chance
Are AI and its effects are the most important existential risk, given only public information available in 2021?
89% chance
OpenAI CEO doesn't think existential risk from AI is a serious concern in Jan 2026
27% chance
Will "AI Control May Increase Existential Risk" make the top fifty posts in LessWrong's 2025 Annual Review?
15% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
Will AI existential risk be mentioned in the white house briefing room again by May 2029?
87% chance
Will humanity wipe out AI x-risk before 2030?
10% chance
In 2050, will the general consensus among experts be that the concern over AI risk in the 2020s was justified?
72% chance
Which book(s) will win Will Kiely's $1,000 bounty by being the new best overview of AI existential risk?
@NathanpmYoung the money's all more or less loaned back at this point - I'm happy waiting indefinitely. (Same applies on all these questions).