Is there an excessive overlap between belief in "AI extinction risk" and longtermism?
Never closes
Yes
No
I'm not sure what would make the overlap excessive.
Most top experts see existentially dangerous AI as a significant threat within the next 2-20 years. As for the few who think it will take a century or more: if they are longtermists, they'll likely still be concerned; if they aren't, they're less likely to be. Going the other way, longtermists are more likely to already be familiar with AI extinction risk, since it used to look like a long-term problem.
The most theoretical AI safety concerns very suddenly stopped being theoretical, which is simultaneously a reason for normal people to become very concerned and a reason for already-concerned longtermists to scream bloody murder.
Related questions
By end of 2028, will AI be considered a bigger x risk than climate change by the general US population? (56% chance)
In 2050, will the general consensus among experts be that the concern over AI risk in the 2020s was justified? (75% chance)
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)? (33% chance)
Is the nature of AI risk completely misunderstood today with respect to the state of the art in 2030? (39% chance)
Will Yudkowsky agree that his "death with dignity" post overstated the risk of extinction from AI, by end of 2029? (15% chance)
At the beginning of 2026, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075? (66% chance)
Contingent on AI being perceived as a threat, will humans deliberately cause an AI winter before 2030? (35% chance)
Will someone take desperate measures due to expectations of AI-related risks by January 1, 2030? (66% chance)
OpenAI CEO doesn't think existential risk from AI is a serious concern in Jan 2026 (28% chance)
At the beginning of 2028, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075? (67% chance)