Duplicate of this old market, with some alterations.
My current estimate of P(doom) is 1%. I think ASI will be invented in the near future, but that alignment-by-default (in Yudkowsky's terms) is very probable, with the actual risks taking the form of catastrophic harm from empowered threat actors or of suboptimal value lock-in. I have read HPMOR and enjoyed it, and I am fairly familiar with Yudkowsky's arguments for doom. But I also found Will MacAskill's review compelling.
Any increase counts. For the "Increases___" questions, I'm using my current average estimates so that I can be swayed in either direction.
I don't expect outside events or arguments to affect my P(doom) before I finish reading the book, but if they do, I'll attempt to disentangle their effects from the book's.
If commenters want to defend or criticize the arguments as presented in the book, I will consider that relevant.
And while I will not bet myself, bettors are welcome to ask for updates as I complete chapters. I will try to finish the book by 6/16/2026, but I'll resolve earlier if I can.