
OpenAI CEO doesn't think existential risk from AI is a serious concern in Jan 2026
Closes Dec 31
27% chance
This question is managed and resolved by Manifold.
Related questions
Will "Ten arguments that AI is an existential risk" make the top fifty posts in LessWrong's 2024 Annual Review?
10% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
Are AI and its effects are the most important existential risk, given only public information available in 2021?
89% chance
Will AI existential risk be mentioned in the white house briefing room again by May 2029?
87% chance
Will AI xrisk seem to be handled seriously by the end of 2026?
14% chance
Will OpenAI exist in Jan 2027?
95% chance
In 2050, will the general consensus among experts be that the concern over AI risk in the 2020s was justified?
72% chance
Will AI cause an existential catastrophe (Bostrom or Ord definition) which doesn't result in human extinction?
25% chance
Is the nature of AI risk completely misunderstood today with respect to the state of the art in 2030?
30% chance
Will humanity wipe out AI x-risk before 2030?
10% chance