Is there an excessive overlap between belief in "AI extinction risk" and longtermism?
I'm not sure what would make the overlap excessive.

Most top experts see existentially dangerous AI as a significant threat within the next 2-20 years. For the few who think it will take a century or more: if they are longtermists, they'll likely be concerned anyway; if they aren't, they're less likely to be. Going the other way, longtermists are more likely to be familiar with AI extinction risk, since it used to look like a long-term problem.

The most theoretical AI safety concerns recently stopped being theoretical very suddenly, which is simultaneously a reason for normal people to become very concerned, and a reason for already-concerned longtermists to scream bloody murder.
