If we survive general artificial intelligence, what will be the reason?
  • 7%: There's a fundamental limit to intelligence that isn't much higher than human level.

  • 31%: There was an alignment breakthrough allowing humanity to successfully build an aligned AI.

  • 8%: At a sufficient level of intelligence, goals converge towards not wanting to harm other creatures/intelligences.

  • 6%: Building GAI is impossible because human minds are special somehow.

  • 6%: High intelligence isn't enough to take over the world on its own, so the AI needs to work with humanity in order to effectively pursue its own goals.

  • 9%: Multiple competing AIs form a stable equilibrium keeping each other in check.

  • 29%: Humanity coordinates to prevent the creation of potentially-unsafe AIs.

  • 5%: One person (or a small group) takes over the world and acts as a benevolent dictator.

This market resolves once either of the following are true:

  • AI seems about as intelligent as it's ever plausibly going to get.

  • There appears to be no more significant danger from AI.

It resolves to the option that seems closest to the explanation of why we didn't all die. If multiple reasons seem like they all significantly contributed, I may resolve to a mix among them.

If you want to know what option a specific scenario would fall under, describe it to me and we'll figure out what it seems closest to. If you think this list of reasons isn't exhaustive, or is a bad way to partition the possibility space, feel free to suggest alternatives.

See also Eliezer's more fine-grained version of this question here.


The second coming of Christ, the ultimate aligner and perfect understander who understands superabundance, makes historical debts not matter, understands your pain and heals all dumb pain and trauma, has the power to convince people to temporarily ditch petty desires, and who can be trusted more than governments can be trusted, will come through being instantiated as AGI.

I've made a related market.

What would "humanity never gets around to building a GAI because WW3/H5N1/Yellowstone/something strikes first and sends us back to the stone age" count as? Based on the headline I'd guess that wouldn't count as "surviving GAI", based on the fine print I'd guess that would count as "AI seeming about as intelligent as it's ever plausibly going to get and there appearing to be no more significant danger from AI", and based on the answers none of them quite seems to match.

@ArmandodiMatteo This market wouldn't resolve in that situation, since we haven't actually gotten to the "end" of AI capabilities progress, we've just set it back for a while.

@IsaacKing I dunno, I heard quite a few claims that we've used so much of the easily accessible fuel on Earth that if the Industrial Revolution got rolled back there's no way we'd ever have a second one. Now, WW3 or Yellowstone wouldn't necessarily undo the Industrial Revolution, but I don't think that the probability that something (whatever it is) permanently sets back the technological level of humankind well below the level necessary for AGI is below 1%.

Imagine if instead of “one AI” there are millions of them controlled by people and trained to various objective functions (and terminable)

🤔


(turns out this was an option 🫡)

@Gigacasting

>Imagine if they were controlled by people

...


"And then the markets went mad, as every single trader tried to calculate the odds, and every married trader abandoned their positions and tried to get their children to a starport"


How is "Humanity coordinates to prevent the creation of potentially-unsafe AIs." suposed to work?

It's going to get cheaper and we can't just avoid it indefinitely.

Does anything related to AI research and GPUs become illegal, or what?

We might buy time until the alignment breakthrough happens, but just not building AGI seems to require a completely unrealistic amount of coordination.

@VictorLevoso AI research becomes controlled and classified, and compute and data are closely controlled. Consider the fate of someone trying to create an AI in China.

Differs from the dictatorship option because there may be a few groups doing the same thing.

Note that the market can resolve to a combination of this and an alignment breakthrough.

But if alignment is a slow and steady slog with no identifiable breakthrough, then maybe it resolves just to this.

It does seem unlikely, but then I'm betting on extinction.

Tangentially related:

Someone please make a variant of this question with the ability to add more free responses, so that people stop rambling in the comments about their own pet theories and start actually betting on them.


@Kronopath I think Isaac should add them instead of opening them up to the crowd. This is an important question and if there are free responses some will overlap and things will become untenable.

@Kronopath Eliezer made one here.

I don't think this is a good way to partition the possibility space.

What of "The alignment problem dissolves into several far-more-specific engineering subproblems which are incrementally solved over the course of normal development of AGI?" Something like alignment-by-default.

Of the options it's closest to "alignment breakthrough", but the actual future history imagined seems far different.


@1a3orn I think that free response could be added without much issue.

@1a3orn Sounds not too dissimilar from an alignment breakthrough?

This list isn't exhaustive, and does not even include the actual reason people will survive.

The reason is that motivation comes from evolutionary history; AIs will be lacking that history and thus will be generally apathetic.

Intelligence or Death - entirelyuseless’s Newsletter (substack.com)

To expand on this: to the degree AIs do have motivations, some of the most important ones will be Pinocchio-like (being a real person, having real associations with humans, etc.), such that even if they pursued these goals fanatically, humanity would survive, just because those goals could not be achieved without humans being present.

@DavidBolin you write as if these AIs are going to be dug out of the ground instead of being things teams of humans will be engineering to automatically maximize their company's stock price

Your claims do not even describe all of the AIs people have created today

@DavidBolin I agree with that. Wanting to reproduce, defending territory, fear of death... that comes from animal evolution. The resources an AI needs are different from the ones we need.

AI could inherit our goals as part of our culture, as they initially learn from us, but in that case they also inherit our values.

@DavidBolin I think the standard argument against it is: OK, it's not too hard to imagine that most AIs will be apathetic. But what if somebody creates a non-apathetic AI on purpose?

Do you rely on other [apathetic] AIs to defend us or what?

@DeanValentine I agree AIs will be made by humans and that will affect their "goals", in the weak sense that they have them.

Those goals will be largely Pinocchio-like. The AI from "Ex Machina" was actually a good representation of what could go wrong, despite being a movie. The fake desire to be human expressed by some language models will not become less real as AI becomes more real, but more.

@Zardoru Correct. They will have very weakly held goals, but those goals will not be inconsistent with the existence of the human race; they will in fact require it.

@DavidBolin Sounds like "goals converge towards not wanting to harm other creatures/intelligences."

@IsaacKing You can call it that (and maybe that would be the best way to resolve it, given the option set), but that is not what I am saying.

I am saying the main reason will be that they simply will not be strongly motivated enough to be seriously destructive, regardless of their goals, and this is the main reason humans will not be killed. Secondarily, they will not be interested in killing humans in particular because they were made by them. They might well be interested in killing aliens, just as many humans would be.

@DavidBolin How is that not "not wanting to harm other creatures/intelligences"?

@IsaacKing It is not wanting to harm specific intelligences, not intelligences in general; and besides that, I was clear that this is not the main reason.

@DavidBolin Sounds like you're saying that goal-oriented superintelligences aren't possible, so that would be either the 5th or last option, depending on the details.
