Additional context:
https://www.lesswrong.com/posts/LhEesPFocr2uT9sPA/safety-timelines-how-long-will-it-take-to-solve-alignment
"Paul Christiano: P(doom from narrow misalignment | no AI safety) = 10%, P(doom from narrow misalignment | 20,000 in AI safety) = 5%"
https://www.lesswrong.com/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer?commentId=EG2iJLKQkb2sTcs4o
"I definitely agree that Eliezer's list of lethalities hits many rhetorical and pedagogical beats that other people are not hitting and I'm definitely not hitting. I also agree that it's worth having a sense of urgency given that there's a good chance of all of us dying (though quantitatively my risk of losing control of the universe through this channel is more like 20% than 99.99%, and I think extinction is a bit less likely still)."
Feb 19, 1:55pm: Will Paul Christiano publicly announce a greater than 10% increase in his p(doom from AGI) within the next 5 years? → Will Paul Christiano publicly announce a greater than 10% increase in his p(doom | AGI before 2100) within the next 5 years?
I think the comments you cite are all Paul talking about chances of doom along more specific paths; his overall estimates of x-risk are higher.
Maybe more like 40% total existential risk from AI this century?
@AnishUpadhayaya6ee What is Paul's current P(doom | AGI before 2100), for reference? What if Paul talks about how his P(doom) has increased but never provides a public quantitative figure?