Is there an excessive overlap between belief in "AI extinction risk" and longtermism?
Never closes
This question is managed and resolved by Manifold.
I'm not sure what would make the overlap excessive.
Most top experts see existentially dangerous AI as a significant threat within the next 2-20 years. For the few who think it will take a century or more: if they are longtermists, they'll likely still be concerned; if they aren't, they're less likely to be. Going the other way, longtermists are more likely to be familiar with AI extinction risk, since it used to look like a long-term problem.
The most theoretical AI safety concerns recently stopped being theoretical very suddenly, which is simultaneously a reason for normal people to become very concerned, and a reason for already-concerned longtermists to scream bloody murder.
Related questions
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
In 2050, will the general consensus among experts be that the concern over AI risk in the 2020s was justified?
72% chance
OpenAI CEO doesn't think existential risk from AI is a serious concern in Jan 2026
27% chance
Will AI cause an existential catastrophe (Bostrom or Ord definition) which doesn't result in human extinction?
25% chance
Will Yudkowsky agree that his "death with dignity" post overstated the risk of extinction from AI, by end of 2029?
18% chance
How much AI extinction risk would you accept?
15% chance
Will humanity wipe out AI x-risk before 2030?
11% chance
Are AI and its effects the most important existential risk, given only public information available in 2021?
89% chance
Will AI cause human extinction before 2100 (and how)?
Is the nature of AI risk completely misunderstood today with respect to the state of the art in 2030?
30% chance