If Artificial General Intelligence has an okay outcome, what will be the reason?
18%
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility.
8%
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first.
9%
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time.
3%
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions.
1.9%
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans.
1.6%
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable.
1.2%
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us.
2%
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie.
8%
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation.
7%
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place.
4%
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers.
1.1%
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.)
11%
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.)
3%
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much.
5%
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.)
1%
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability.
5%
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking.
11%
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.)

An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.

This market is a duplicate of https://manifold.markets/IsaacKing/if-we-survive-general-artificial-in with different options. https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence-539844cd3ba1?r=RWxpZXplcll1ZGtvd3NreQ is this same question but with user-submitted answers.

(Please note: It's a known cognitive bias that you can make people assign more probability to one bucket over another, by unpacking one bucket into lots of subcategories, but not the other bucket, and asking people to assign probabilities to everything listed. This is the disjunctive dual of the Multiple Stage Fallacy, whereby you can unpack any outcome into a big list of supposedly necessary conjuncts that you ask people to assign probabilities to, and make the final outcome seem very improbable.

So: That famed fiction writer Eliezer Yudkowsky can rationalize at least 15 different stories (options 'A' through 'O') about how things could maybe possibly turn out okay; and that the option texts don't have enough room to list out all the reasons each story is unlikely; and that you get 15 different chances to be mistaken about how plausible each story sounds; does not mean that Reality will be terribly impressed with how disjunctive the okay outcome bucket has been made to sound. Reality need not actually allocate more total probability into all the okayness disjuncts listed, from out of all the disjunctive bad ends and intervening difficulties not detailed here.)

Michael Mars bought Ṁ296 of N. A crash project ...

Nobody here knows enough about neurofeedback to consider it at all, which is the simplest explanation for why it ended up at the literal bottom of the list when in reality it is in the top 5.

Michael Mars

@ooe133 Oops, this statement is worded in a non-Bayesian manner. Correction: not enough people here know enough about...

Max bought Ṁ20 of I. The tech path to...

Is "AGI gains control of the future but is uninterested in killing existing humans or destroying human civilization" option C, none of the above, or a "not okay" outcome?

Erick Ball

@MaxMorehead it's pretty close to E, "Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans."

Ben Shindel bought Ṁ20 of G. It's impossible/...

Imperishable Neet bought Ṁ10 of M. "We'll make the ...

It's baffling to me that people believe A.) is plausible in any reasonable timespan without AGI; brain-computer interfaces and emulation are so far off. I believe it's possible we'll see greater coordination as described, but I'm gonna go with M.). Perhaps it's just not such a clean catch-22 that you already need aligned AGI to align AGI? If that turns out to be the case, then I'll go with B.)

Scrooge McDuck

@ImperishableNeet Agreed, I would short A.) if I could.

Jonathan Ray

Can't this be converted into a multi-binary market so I can bet NO on things?

The King

I made a duplicate where you are allowed to short sell options you think are unlikely: https://manifold.markets/ChristopherKing/if-artificial-general-intelligence-669e44ca740e

paleink bought Ṁ50 of M. "We'll make the ...

Kronopath

Why would you post this as an image? You made me scroll through Yudkowsky's anxiety-inducing Twitter timeline to find the source and figure out the context of what he's talking about.

https://twitter.com/ESYudkowsky/status/1656150555839062017

Spoiler: he’s talking about OpenAI’s attempts to use GPT-4 to interpret and label the neurons in GPT-2.

Andrew G

I'd like to showcase this market—it concerns an important question, has many detailed yet plausible options, and has personally changed how I think about which of these answers is worth maximizing the chances of.

Martin Randall

@AndrewG I like this as a social media post but as a prediction market I am frustrated by its high chance of resolving n/a (20% is a lot) and Manifold's DPM mechanism.

Manifold bought Ṁ1,000 of You are fooled by at...

@AndrewG Unfortunately, we don't have a great way of subsidizing DPM markets at the moment. For now I've put Ṁ1,000 into "You are fooled by at least one option on this list..."; I didn't want to place more lest I shift probabilities too much.

Jelle bought Ṁ15 of J. Something 'just ...

A seems so unlikely... augmenting biological brains with their arbitrary architecture that evolved over millions of years adds so many complexities compared to just sticking with silicon.

Elliot Davies

@Jelle sounds completely batshit - would love a steelman

Kabir Kumar

@ElliotDavies Don't know if this counts as a steelman, but I might say that augmenting human brains is more difficult and slower, and offers fewer opportunities for scalability and market capture, than making a model that does most of what you were wanting the human to do.

Of course, making such a model is fraught with security problems, but cybersecurity has been down the drain for the last ten years anyway (real-time software updates, anyone?)

Patrick Delaney

Anyone who thinks that AGI is definitely possible should have no problem answering this simple question:

Erick Ball

@PatrickDelaney but you could just run it arbitrarily slowly, so there's no lower bound. Also wouldn't you expect power requirements to change as the technology is further developed?

Alex Amadori

@PatrickDelaney even if better bounds were put on the question by specifying that the AI has to be able to compete with humans on certain timed tasks, the answer most certainly can't be higher than 20 watts, as that's about how much power a human brain consumes.

Kabir Kumar

Patrick Delaney

@KabirKumar One's level of confidence should be supported by a highly sophisticated answer rather than browbeating others because they are afraid of looking stupid. It's a super simple question... how much power does your computer use?

dionisos

@PatrickDelaney Except the question is completely different from the question of whether AGI is possible or not.

AGI is definitely possible because we are here.

The consumption of the first implementation will depend on what that first implementation is (it could range from somewhat optimized, with almost just the necessary parts for an AGI, to millions of times more than the necessary parts).

We also don't exactly know what is needed for it: maybe it requires the same level of information processing as a human brain, or maybe there are just some algorithms that are very hard to think of, requiring breakthroughs in our understanding of what intelligence is, but they would give you AGI and could run on a phone.

It will also depend on the hardware we will have at the time.