In 2050, will the general consensus among experts be that the concern over AI risk in the 2020s was justified?
81% chance
@jonsimon Neither matters. What this market cares about is "was the probability they placed on the world being destroyed by AI justified by the evidence they had at the time?"
@IsaacKing Whose probability/concern needs to be justified? Laypeople? Computer scientists? Computer scientists who responded to the AI Impact survey? Existential safety advocates / the AI existential risk community? Eliezer Yudkowsky?
I mainly ask because I think the probabilities of, say, extinction would range from something like 5% (maybe laypeople and computer scientists) to 50% (average existential safety advocate) to >99.9% (Yudkowsky).
Related questions
- If humanity survives to 2100, what will experts believe was the correct level of AI risk for us to assess in 2023? (38% chance)
- By end of 2028, will AI be considered a bigger x-risk than climate change by the general US population? (43% chance)
- Will a sentient AI system have existed before 2040? [Resolves to 2100 expert consensus] (59% chance)
- Is the nature of AI risk completely misunderstood today with respect to the state of the art in 2030? (65% chance)
- Will a sentient AI system have existed before 2030? [Resolves to 2100 expert consensus] (38% chance)
- Will there be significant protests calling for AI rights before 2030? (46% chance)
- Will AI take over the world by 2100? (32% chance)
- By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)? (35% chance)
- Will AI lead to an S-risk by 2100? (25% chance)
- Contingent on AI being perceived as a threat, will humans deliberately cause an AI winter before 2030? (38% chance)