
ML researchers’ median probability of existential risk from AI
14 expected
What is ML researchers’ median probability of existential risk from AI according to the most recent survey conducted at a major AI conference (NeurIPS, ICML) by June 2025?
The AI Impacts report puts the median ML researcher's probability estimate of human extinction from AI at about 5% as of 2022. How will this change in the next few years?
This question is managed and resolved by Manifold.
Related questions
The probability of "extremely bad outcomes e.g., human extinction" from AGI will be >5% in next survey of AI experts
75% chance
OpenAI CEO doesn't think existential risk from AI is a serious concern in Jan 2026
27% chance
The probability of extremely good AGI outcomes, e.g., rapid human flourishing, will be >24% in next AI experts survey
59% chance
What will be the median p(doom) of AI researchers after AGI is reached?
At the beginning of 2026, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
68% chance
Are AI and its effects the most important existential risk, given only public information available in 2021?
89% chance
What will be the average P(doom) of AI researchers in 2025?
19% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
Will AI cause an existential catastrophe (Bostrom or Ord definition) which doesn't result in human extinction?
25% chance
At the beginning of 2027, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
71% chance