ML researchers’ median probability of existential risk from AI

What is ML researchers’ median probability of existential risk from AI according to the most recent survey conducted at a major AI conference (NeurIPS, ICML) by June 2025?

The AI Impacts report puts the median ML researcher's probability estimate of human extinction at about 5% as of 2022. How will this change over the next few years?

bought Ṁ10 of LOWER

Once this market settles somewhere, you could make a binary over/under market to get much more engagement.

People don't feel compelled to bet in these markets because, if the realistic range is 5-35%, the maximum win is small relative to the amount of mana you have to commit.

Winner-takes-all over/under markets are more exciting; alternatively, a multi-choice market with buckets, e.g. possible answers of

  • 0-10%

  • 11-20%

  • 21-30%

  • 31-40%

  • 41%+

which still gives a winner-takes-all payout rather than one confined to a narrow 5-25% range.

bought Ṁ20 of LOWER

An informal poll in the AISafety@Neurips2023 channel on the Whova app, with 58 responses, puts p(doom) at 32%. Participants could select only a bin of the form 0-20, 21-40, etc., and I took the weighted average of the bin centers. This likely overestimates the true figure for two reasons: 1) selection bias (respondents are ML safety people, not ML researchers in general, who chose to answer a poll); 2) the 0-20 bin holds roughly 50% of respondents, and I suspect most of them actually put p(doom) near 0, but they get counted at 10%.
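The estimate above can be sketched as a weighted average of bin centers. The per-bin counts below are hypothetical (the comment only reports 58 total responses, roughly 50% in the 0-20 bin, and the resulting 32% figure); they are for illustration only.

```python
def bin_center_average(counts):
    """Weighted average of bin centers for bins 0-20, 21-40, ..., 81-100.

    counts: list of response counts per bin, lowest bin first.
    """
    centers = [10, 30, 50, 70, 90]  # midpoint of each 20-point bin
    total = sum(counts)
    return sum(c * n for c, n in zip(centers, counts)) / total

# Hypothetical split of 58 responses, with ~50% in the lowest bin:
example_counts = [29, 12, 8, 5, 4]
print(round(bin_center_average(example_counts), 1))
```

Note how the coarse binning drives the overestimate the commenter describes: any respondent in the 0-20 bin contributes 10 percentage points to the average, even if their actual estimate is near 0.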
