Above 5%: 83%
Above 10%: 73%
Above 20%: 62%
Above 50%: 24%
Above 80%: 10%
AGI is defined as an AI that is better at AI research than the average human AI researcher not using AI.
p(doom) is defined as human extinction or outcomes that are similarly bad.
In Katja Grace's 2022 survey, the median values were 5% for "extremely bad outcome (e.g., human extinction)" and 5-10% for human extinction.
All answers that are true resolve Yes.
Related questions
At the beginning of 2035, will Eliezer Yudkowsky still believe that AI doom is coming soon with high probability? (61% chance)
Will we have at least one more AI winter before AGI is realized? (60% chance)
What will be the average P(doom) of AI researchers in 2025? (21% chance)
In which year will a majority of AI researchers concur that a superintelligent, fairly general AI has been realized?
When will AI be better than humans at AI research? (Basically AGI)
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved? (37% chance)
Will the next Millennium Problem be solved by an AI? (45% chance)
Will AI surpass humans in conducting scientific research by 2030? (36% chance)
When will OpenAI announce that they have achieved AGI?
Who will be the first to reveal human-level AGI?