What are the probabilities of these AI outcomes (X-risk, dystopias, utopias, in-between outcomes, status quo outcomes)?
19%
A. Death by paperclips, eternal torment of all humans by AI, or similar unalignment catastrophe.
11%
B. Governments and/or other powerful entities use AI as a tool of repression, enabling global techno-totalitarianism along the model of China during Zero Covid or worse.
9%
C. AI doesn't actively want to hurt us, but (possibly aided by transhumanists) it becomes obsessed with utility maximization and forces us all into mind-uploads and/or experience machines to free up resources for more computronium.
7%
D. AI wipes out most white-collar jobs within a decade and most blue-collar jobs within a generation; powerful humans and/or AIs at least seriously consider disposing of the "useless eaters" en masse, us being powerless to resist.
7%
E. AI wipes out most jobs as in D. No disposing of the human masses, but general perception that AI has made life less meaningful/fulfilling & significantly worsened the human experience on dimensions other than hedonium maximization.
11%
F. AI wipes out most jobs as in D. People not forced into mind-uploads or experience machines. General perception that AI has made life more meaningful/fulfilling & improved the human experience on dimensions other than hedonium maximization.
12%
G. AI development continues but doesn't change things too much, somehow. Most jobs, even low-level white collar jobs, don't get impacted too hard, as new work is found to replace newly automated work. Labor force participation remains high.
5%
H. Humanity coordinates to prevent the development of significantly more powerful AIs.
18%
I. AI soon hits fundamental scaling limits and we enter another AI winter.

Buy/sell these outcomes to the probabilities you consider appropriate.

Mar 25, 6:05am: What are the probabilities of these AI outcomes? → What are the probabilities of these AI outcomes (X-risk, dystopias, utopias, in-between outcomes, status quo outcomes)?

D. AI wipes out most white-collar jobs within a decade and most blue-collar jobs within a generation; powerful humans and/or AIs at least seriously consider disposing of the "useless eaters" en masse, us being powerless to resist.

@connorwilliams97 This is one I’d be really concerned about (perhaps instead of wiping out jobs outright, we might see wages fall behind GDP growth), but I think the one-decade timeline is too short. I’d put it at 30-100 years.

Dystopia does not require AI at all. The elements for dystopia exist today.

Anyone know why the graph over time disappeared and how to get it back?

@connorwilliams97 This is a parimutuel market, a type which is now deprecated; I would consider remaking this market with the new and much improved multiple-choice market type.
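For context, in a parimutuel market all stakes go into a shared pool, and the pool (minus any fee) is split among bettors on the winning outcome in proportion to their stakes. A minimal sketch of that payout rule (the function name and fee handling here are illustrative, not Manifold's actual implementation):

```python
def parimutuel_payouts(bets, winner, fee=0.0):
    """Split the total pool (minus fee) among bettors who backed the
    winning outcome, proportional to their stakes.

    bets: dict mapping bettor -> (outcome, stake)
    """
    pool = sum(stake for _, stake in bets.values())
    winning_total = sum(stake for out, stake in bets.values() if out == winner)
    if winning_total == 0:
        return {}  # nobody backed the winner; the pool is not paid out
    payable = pool * (1 - fee)
    return {
        bettor: payable * stake / winning_total
        for bettor, (out, stake) in bets.items()
        if out == winner
    }

# Example: 200 mana pooled, outcome "I" wins, 150 of it was on "I".
payouts = parimutuel_payouts(
    {"a": ("I", 100), "b": ("G", 50), "c": ("I", 50)}, "I"
)
```

One consequence of this design is that your payout depends on how others bet after you, which is part of why fixed-probability multiple-choice markets replaced it.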

A, B, E, F, G, I will happen, not sure in which order though. Most of the options are not mutually exclusive.

What incentive do I have to bet in this market in any particular way? How is it going to resolve?

@IsaacKing it will resolve as I (nothing), which is the least popular of the plausible options.

@IsaacKing Obviously if A happens it won't resolve, since there will be no one around to resolve it. Otherwise, my first attempt at a resolution rule: the market resolves after X years of a relatively steady state of one of the other outcomes. Most of the outcomes are easy to tell apart. Differentiating E from F would be based on a thorough review of public opinion metrics and social-scientific evidence. Differentiating D from E would be based on whether very powerful people/AIs have seriously discussed disposing of the masses - it being a popular conspiracy theory wouldn't be enough.

No AI has ever murdered 100 million people (but ideologies vaguely similar to the aligners' have).

No AI has ever lowered global IQ by a point per decade (but malaria nets don't help).

No AI has ever advocated for a regulatory moat around itself (but [will not be named] does).

The higher a human's IQ, the more they respect monkeys/elephants.

(Some even worship cows.)

The smarter the AI, the safer it is 🤔

I would consider G to hold even if labor force participation drops dramatically for some reason related to AI (e.g., a UBI gets established in most countries, nano replication invented, etc ).

@NLeseul If the UBI causes labor participation to drop, and the drop can clearly be traced to the UBI in particular rather than to AI, that's G. But if AI causes labor participation to drop and the UBI is introduced in response, then that's definitely F.

If AI-based nanoreplication puts most people out of work, that's also F.

The core tenet of G is that AI somehow doesn't reduce labor force participation significantly. That's the exact thing that differentiates it from F.

@MrMayhem Ugh. I definitely meant to type "some reason UNrelated to AI" there.

Yes, I agree that if AI is clearly the catalyst that leads to other social or technological changes, then that would indicate F.

Social collapse from peak oil.

@MarkIngraham X. AI servers use up the remaining economical oil; nerds forced to debate alignment issues in person.
