My current estimate of P(Doom) is 35%. I believe that ASI will be invented in the near future, but that alignment is tractable enough that this will most likely end well for humanity. I also believe that an international treaty banning frontier AI development, of the kind Yudkowsky advocates, is extremely unlikely to happen, and that even if it did, it would probably not reduce P(Doom). I have read most of Yudkowsky's work, including HPMOR, the Sequences, and much of LessWrong.
My preorder of "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All" was delivered while I was writing this question. I plan to read it within the next week, though it may take a few weeks.
Given all that, feel free to speculate on how reading the book will affect my P(Doom). I plan to discuss the book and read what other people think of it, but I will not seek out anything else that might change my view on P(Doom).
If I have not finished reading the book by 10/20, I will resolve this question N/A. I will also resolve N/A if my P(Doom) changes significantly for reasons unrelated to the book.