
In 2050, will the general consensus among experts be that the concern over AI risk in the 2020s was justified?
75% chance
This question is managed and resolved by Manifold.
@jonsimon Neither matters. What this market cares about is "was the probability they placed on the world being destroyed by AI justified by the evidence they had at the time?"
@IsaacKing Whose probability or concern needs to be justified? Laypeople? Computer scientists? Computer scientists who responded to the AI Impacts survey? Existential safety advocates and the AI existential risk community? Eliezer Yudkowsky?
I mainly ask because I think the stated probabilities of, say, extinction would range from roughly 5% (laypeople and computer scientists) to 50% (the average existential safety advocate) to >99.9% (Yudkowsky).
Related questions
By end of 2028, will AI be considered a bigger x risk than climate change by the general US population?
55% chance
If humanity survives to 2100, what will experts believe was the correct level of AI risk for us to assess in 2023?
36% chance
Public opinion, late 2025: Out-of-control AI becoming a threat to humanity, a real threat?
Is the nature of AI risk completely misunderstood today with respect to the state of the art in 2030?
30% chance
At the beginning of 2026, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
66% chance
Will AI be considered safe in 2030? (resolves to poll)
72% chance
Will >90% of Elon re/tweets/replies on 19 December 2025 be about AI risk?
7% chance
Will humanity wipe out AI x-risk before 2030?
10% chance
In 2050, what will be the most accurate statement about the control of AI?
Will there be a massive catastrophe caused by AI before 2030?
32% chance