Additional context:
https://www.lesswrong.com/posts/LhEesPFocr2uT9sPA/safety-timelines-how-long-will-it-take-to-solve-alignment
"Paul Christiano: P(doom from narrow misalignment | no AI safety) = 10%, P( doom from narrow misalignment | 20,000 in AI safety) = 5%"
https://www.lesswrong.com/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer?commentId=EG2iJLKQkb2sTcs4o
"I definitely agree that Eliezer's list of lethalities hits many rhetorical and pedagogical beats that other people are not hitting and I'm definitely not hitting. I also agree that it's worth having a sense of urgency given that there's a good chance of all of us dying (though quantitatively my risk of losing control of the universe though this channel is more like 20% than 99.99%, and I think extinction is a bit less less likely still)."
Feb 19, 1:55pm: Will Paul Christiano publicly announce a greater than 10% increase in his p(doom from AGI) within the next 5 years? → Will Paul Christiano publicly announce a greater than 10% increase in his p(doom | AGI before 2100) within the next 5 years?
I think the comments you cite are all Paul talking about chances of doom along more specific paths, and his overall estimates of x-risk are higher.
Maybe more like 40% total existential risk from AI this century?
@AnishUpadhayaya6ee What is Paul's current P(doom | AGI before 2100), for reference? What if Paul talks about how his P(doom) has increased but never provides a public quantitative figure?
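Since the question doesn't pin down whether "greater than 10% increase" means percentage points or a relative change, here is a minimal sketch of both readings. The 0.20 baseline (taken loosely from Paul's "more like 20%" comment above), the 0.35 hypothetical future figure, and the helper functions are illustrative assumptions, not the market's actual resolution criteria.

```python
# Sketch only: two possible readings of "a greater than 10% increase in p(doom)".
# Baseline and new_estimate are assumed figures for illustration.

def exceeds_absolute(baseline: float, new_estimate: float, threshold: float = 0.10) -> bool:
    """True if the new estimate is more than `threshold` percentage points above baseline."""
    return new_estimate - baseline > threshold

def exceeds_relative(baseline: float, new_estimate: float, threshold: float = 0.10) -> bool:
    """True if the new estimate is more than a `threshold` fraction above baseline."""
    return new_estimate > baseline * (1 + threshold)

baseline = 0.20       # assumed: "more like 20%" for the loss-of-control channel
new_estimate = 0.35   # hypothetical future public figure

print(exceeds_absolute(baseline, new_estimate))  # True: 0.35 - 0.20 = 0.15 > 0.10
print(exceeds_relative(baseline, new_estimate))  # True: 0.35 > 0.20 * 1.10 = 0.22
```

Under the absolute reading the bar is much higher (a 10-point jump), so which reading the creator intends matters a lot for resolution.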