Is there an excessive overlap between belief in "AI extinction risk" and longtermism?
Never closes
I'm not sure what would make the overlap excessive.
Most top experts see existentially dangerous AI as a significant threat within the next 2-20 years. Among the few who think it will take a century or more, those who are longtermists will likely still be concerned, while those who aren't are less likely to be. Going the other way, longtermists are more likely to be familiar with AI extinction risk in the first place, since it used to look like a long-term problem.
The most theoretical AI safety concerns very suddenly stopped being theoretical, which is simultaneously a reason for ordinary people to become very concerned and a reason for already-concerned longtermists to scream bloody murder.
Related questions
Will >90% of Elon re/tweets/replies on 19 December 2025 be about AI risk?
10% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
In 2050, will the general consensus among experts be that the concern over AI risk in the 2020s was justified?
77% chance
At the beginning of 2026, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
66% chance
OpenAI CEO doesn't think existential risk from AI is a serious concern in Jan 2026
27% chance
Will Trump repeatedly raise concerns about existential risk from AI before the end of 2025?
10% chance
Will "Ten arguments that AI is an existential risk" make the top fifty posts in LessWrong's 2024 Annual Review?
14% chance
Will AI cause an existential catastrophe (Bostrom or Ord definition) which doesn't result in human extinction?
25% chance
Will Yudkowsky agree that his "death with dignity" post overstated the risk of extinction from AI, by end of 2029?
18% chance
How much AI extinction risk would you accept?
15% chance