If we survive general artificial intelligence, what will be the reason?

This market resolves once either of the following is true:

  • AI seems about as intelligent as it's ever plausibly going to get.

  • There appears to be no more significant danger from AI.

It resolves to the option that seems closest to the explanation of why we didn't all die. If multiple reasons seem like they all significantly contributed, I may resolve to a mix among them.
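
For concreteness, here is a toy sketch of how a mixed resolution could cash out. The option weights, share counts, and the proportional-payout rule below are illustrative assumptions for this example only, not a statement of Manifold's exact mechanics.

```python
# Toy sketch: the market resolves to a mix of reasons, and a trader's payout
# is assumed to be proportional to their shares in each option times that
# option's resolution weight. All numbers are made up for illustration.

resolution_weights = {
    "Humanity coordinates to prevent unsafe AIs": 0.6,
    "Alignment breakthrough": 0.4,
}

# Hypothetical holdings of one trader across three options.
my_shares = {
    "Humanity coordinates to prevent unsafe AIs": 100,
    "Alignment breakthrough": 50,
    "Benevolent dictator": 200,  # weight 0 in this scenario, pays nothing
}

payout = sum(
    shares * resolution_weights.get(option, 0.0)
    for option, shares in my_shares.items()
)
print(f"Payout: Ṁ{payout:.0f}")  # 100*0.6 + 50*0.4 + 200*0 = 80
```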

If you want to know what option a specific scenario would fall under, describe it to me and we'll figure out what it seems closest to. If you think this list of reasons isn't exhaustive, or is a bad way to partition the possibility space, feel free to suggest alternatives.

  • Humanity coordinates to prevent the creation of potentially-unsafe AIs. (29%)

  • There was an alignment breakthrough allowing humanity to successfully build an aligned AI. (23%)

  • High intelligence isn't enough to take over the world on its own, so the AI needs to work with humanity in order to effectively pursue its own goals. (21%)

  • One person (or a small group) takes over the world and acts as a benevolent dictator. (13%)

  • At a sufficient level of intelligence, goals converge towards not wanting to harm other creatures/intelligences. (5%)

  • Multiple competing AIs form a stable equilibrium keeping each other in check. (5%)

  • There's a fundamental limit to intelligence that isn't much higher than human level. (3%)

  • Building GAI is impossible because human minds are special somehow. (0.7%)
Patrick Delaney bought Ṁ10 of High intelligence is...

Where are the killer robots going to get their cobalt? Have you not seen how cobalt is mined?

David Mathers

I feel a missing option here is 'we build powerful intelligences that are not agents with goals', a la the proposal here: https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf Or would that count as a mixture of 'Humanity coordinates to prevent the creation of potentially-unsafe AIs' and 'There was an alignment breakthrough allowing humanity to successfully build an aligned AI'?

Isaac King

@DavidMathers Sounds like it, yeah. Having survived some AGIs doesn't explain why others haven't killed us, so this market will resolve based on the explanation for the latter.

Duncn

At a sufficient level of intelligence, goals converge towards wireheading.

Martin Randall

@Duncan At a sufficient level of intelligence, wireheading converges towards genocide.

Duncn

@MartinRandall Only if you push wireheading on others. Or if pleasure is scalable. But a simple desire to max out one's current capacity for pleasure should result in something more akin to suicide.

Martin Randall

@Duncan I think with a satisficer you are right, but an optimizer is worried about grabby aliens or unfriendly AIs disturbing its bliss and takes over the solar system to protect its wires.

Noa Nabeshima

@MartinRandall Maxing out current pleasure is a valid target for optimization.

Martin Randall

@NoaNabeshima I can't optimize my current pleasure; my actions only affect the future.

Martin Randall

@NoaNabeshima but maybe with hyperbolic discounting one would not need to conquer as much.

Noa Nabeshima

Access to the most powerful AI systems is fairly centralized among a small number of governments or companies. These AI systems are aligned by default, and the governments/companies/AIs do something to ensure that bad things don't happen.

Martin Randall

@NoaNabeshima "aligned by default"?

Noa Nabeshima

@MartinRandall Something like @ThomasKwa's:

> I think the most likely reason is some combination of "alignment isn't that hard and attempts to instill beneficial goals into AIs generalize well, despite no clear breakthrough" and "we can train somewhat goal-directed systems to be somewhat corrigible, even at high intelligence levels".

Martin Randall

@NoaNabeshima oh, I wouldn't call that aligned by default, because it still requires some effort. E.g., reading a book is easy, but it doesn't happen by default.

I'm not sure whether this would be enough to prevent human extinction. The first possibility implies that malicious goals generalize well, and in the second possibility a corrigible AI may struggle to defend against a malicious AI.

Noa Nabeshima

@MartinRandall If, in an easily-alignable world, the most powerful AI systems at any time are controlled by a small number of actors, those actors might be able to do something to reduce the odds that bad things happen because of other, weaker actors with intentionally malicious systems, e.g. by making it very difficult to train new, powerful AI systems.

KT George bought Ṁ100 of High intelligence is...

I think AI alignment is too difficult. We build an AI smart enough to realise that if a smarter AI gets built, it won't be aligned with its own values, so it stops further AI development. If we survive, it's because a smart AI finds it easier to stop AI development than to kill all humans.

Fela

We got lucky, and the AI that escaped and became all-powerful happened to be aligned enough. Our survival happened to be (part of) its goal.

Martin Randall

Humans successfully exterminated; there is no more significant danger to humans from AI.

Tyler Coleman

@MartinRandall Ah, yes, the ole "can't break what doesn't exist". We're going to have to teach our AIs to have the courage to believe that we can potentially coexist.

Martin Randall

With Folded Hands

Martin Randall

AI researchers keep randomly dying, eventually we realize that the field is cursed.

Tyler Coleman

@MartinRandall That seems like a pretty good solution to the paradox of time travel, if it were real. Time travel could effectively prevent its own proliferation by statistically unlikely events. (Though why it would is a mystery.)

Martin Randall

@TylerColeman Absolutely. Not my idea: https://www.lesswrong.com/posts/o5F2p3krzT4JgzqQc/causal-universes (see section on time turners and the game of life).

This is an extension of quantum immortality, where all surviving Martins must experience increasingly improbable universes in which AI researchers randomly die and life-extension researchers do not. It probably doesn't work.

Thomas Kwa

I think the most likely reason is some combination of "alignment isn't that hard and attempts to instill beneficial goals into AIs generalize well, despite no clear breakthrough" and "we can train somewhat goal-directed systems to be somewhat corrigible, even at high intelligence levels".

Martin Randall

Race dynamics cause an AI to launch an attack on other AIs before it is ready, and it ends up destroying and preventing all AIs, including itself, without exterminating all humans.

Jim Hays

How about: Society collapses first for some other reason?

Isaac King

@JimHays That wouldn't count as "surviving GAI", since we never really got to that point.

Jim Hays

@IsaacKing What if, for example, we did develop AGI, but then we got an unrelated pandemic that killed 80% of people and caused the collapse of the systems needed to maintain AGI? Would that be in the “High intelligence isn’t enough…” bucket?

Isaac King

@JimHays It would depend on why we didn't all die in the span of time between AGI and the pandemic.

NamesAreHard

It seems like the most direct answer here is that if we "survive", then it's because quantum immortality turns out to be true. The question is still valid for what will have happened in that surviving branch, though :)