
What are the probabilities of these AI outcomes (X-risk, dystopias, utopias, in-between outcomes, status quo outcomes)?
A. (19%) Death by paperclips, eternal torment of all humans by AI, or a similar misalignment catastrophe.
B. (10%) Governments and/or other powerful entities use AI as a tool of repression, enabling global techno-totalitarianism along the model of China during Zero Covid, or worse.
C. (8%) AI doesn't actively want to hurt us, but (possibly aided by transhumanists) it becomes obsessed with utility maximization and forces us all into mind-uploads and/or experience machines to free up resources for more computronium.
D. (6%) AI wipes out most white-collar jobs within a decade and most blue-collar jobs within a generation; powerful humans and/or AIs at least seriously consider disposing of the "useless eaters" en masse, with us powerless to resist.
E. (6%) AI wipes out most jobs, as in D. The human masses are not disposed of, but there is a general perception that AI has made life less meaningful/fulfilling and significantly worsened the human experience on dimensions other than hedonium maximization.
F. (11%) AI wipes out most jobs, as in D. People are not forced into mind-uploads or experience machines. There is a general perception that AI has made life more meaningful/fulfilling and improved the human experience on dimensions other than hedonium maximization.
G. (11%) AI development continues but somehow doesn't change things too much. Most jobs, even low-level white-collar jobs, aren't hit too hard, as new work is found to replace newly automated work. Labor force participation remains high.
H. (5%) Humanity coordinates to prevent the development of significantly more powerful AIs.
I. (23%) AI soon hits fundamental scaling limits and we go into another AI winter.
Buy or sell these outcomes until the probabilities match what you consider appropriate.
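As a quick sanity check, the displayed prices for mutually exclusive outcomes should sum to roughly 100%. The minimal Python sketch below sums the percentages listed above (the short outcome labels are paraphrases for readability, not the official option text); the total comes to 99%, which is consistent with per-option rounding of the market prices.

# Sanity check: displayed outcome probabilities should sum to roughly 100%.
# Percentages are copied from the market listing above; the labels are
# shortened paraphrases. Rounding of individual prices means the total
# can be slightly off from exactly 100.
outcomes = {
    "A. Misalignment catastrophe": 19,
    "B. Global techno-totalitarianism": 10,
    "C. Forced mind-uploads / experience machines": 8,
    "D. Mass job loss, 'useless eaters' at risk": 6,
    "E. Mass job loss, life seen as less meaningful": 6,
    "F. Mass job loss, life seen as more meaningful": 11,
    "G. Roughly status quo": 11,
    "H. Coordinated halt to more powerful AI": 5,
    "I. Scaling limits and another AI winter": 23,
}

total = sum(outcomes.values())
print(f"Sum of displayed probabilities: {total}%")  # prints 99%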
This question is managed and resolved by Manifold.