MANIFOLD
How will reading "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All" affect my P(Doom)?
1
Ṁ125 · Ṁ30
Jun 16
34%
Increases P(Doom)
34%
No significant change.
34%
Decreases P(Doom)
50%
Increases P(ASI before 2038)?
50%
Increases P(Pause by international treaty within 4 years is desirable)?

Duplicate of this old market, with some alterations.
My current estimate of P(Doom) is 1%. I think that ASI will be invented in the near future, but that alignment-by-default (in Yud's terms) is very probable, with actual misalignment taking the form of catastrophic risk from empowered threat actors or of leading to sub-optimal value lock-in. I have read HPMOR and enjoyed it, and am fairly familiar with Yudkowsky's arguments for doom. But I also found Will MacAskill's review compelling.

Any increase counts. For the "Increases ___" questions, I'm using my current average estimates so that I can easily be swayed one way or the other.

  • I don't expect outside events or arguments to affect my P(Doom) before I finish reading the book, but if they do, I'll attempt to disentangle their effects from the book's.

  • If commentators want to defend/criticize the arguments as presented in the book, I will consider this relevant.

  • And while I will not bet myself, bettors are welcome to ask for updates as I complete chapters. I will try to finish the book by 6/16/2026, but I'll resolve earlier if I can.
