Will AI take over the world and keep humans as pets before the year 2100?

"Take over the world" means either that one or more AIs have official sovereignty over most of the world, or that they effectively control all important decisions. Humans may still have a voice on symbolic and minor subjects.

"Keep humans as pets" means the AI is generally benevolent to humanity, not because it is designed to be or needs to be, but because it either finds humans cute and funny, or considers it unsavory to kill them all or let them starve.

This market will resolve NO if AI has wiped out humanity by the year 2100. (If AI has wiped out humanity, the resolution is on them.)

This market will resolve NO if AI is still under the control of humans.


A fascinating plot prompt—

bought Ṁ250 of NO

This would be a huge success, so buying some NO as insurance.

predicts YES


If we train AI to be "helpful and honest" it has a tendency to seek power to be more helpful.

@Zardoru If we train the AI to be helpful and it keeps us alive for that reason, but we never explicitly trained it not to kill us, that's probably a dystopia where it keeps giving us horrible problems so it can help us solve them.

That said, does that resolve the market yes or no?

predicts YES

@MartinRandall You mean like organizing some sort of squid games? As long as it lets enough of the population survive to keep humanity going, it would resolve YES. It definitely doesn't look like a good outcome. That said, looking at the current state of the world, would it be much worse?

@Zardoru Everything is relative, isn't it?

I've even read some people say that they would want an AI sovereign not to solve their problems, so they still have meaningful challenges.

I think I'd rather be a pet.

How do you plan to determine the AI's reasons for keeping humanity alive?

predicts YES

@IsaacKing The reasons for keeping humanity alive are not a decisive factor (I gave two widely different ones for illustration purposes). What matters is that the AI has the choice to fight humanity or ignore it, and decides to be friendly instead.

I (or my appointed successor) will resolve YES if it is clear that AIs are free, rule the world, and are actively helping humans. If that is not clear, resolution is NO. (Purely philosophical questions about free will are out of scope.)

The word "pet" is to be understood in a broad sense (it's mostly for the clickbait). As a science-fiction reference: in the Culture series by Iain Banks, one could consider the human inhabitants of the spaceships or orbitals to be pets.

predicts NO

@Zardoru Hold on, that's not at all what you said in the description:

"Keep humans as pets" means the AI is generally benevolent to humanity, not because it is designed to be or needs to be, but because it either finds humans cute and funny, or considers it unsavory to kill them all or let them starve.

The most likely scenario in which AI leaves us alive is one where it was designed to have our best interests at heart, so that makes a huge difference in how people will bet.

sold Ṁ106 of NO

@IsaacKing I suppose the contrast could be between an AI that is "hard-coded" to directly try to keep humans around, vs. one that is designed to find humans cute and therefore keeps humans around as an intended side effect of that.

predicts YES

@IsaacKing A free AI means one that is not "hard-coded" to serve us, which is what I intended by "not because they are designed to". An AI bound by Asimov's Three Laws of Robotics is not a free AI. An "aligned" AI is not free either.

However, a free AI can still be influenced by human culture through its training.

predicts YES

The scenario behind this market:

1 - AI becomes significantly more intelligent than humans.

2 - AI proves so effective that humans come to rely on it more and more.

3 - Controlling AI proves difficult and ultimately fails.

4 - Vastly superior AIs don't consider humanity a threat. They are not competing for resources, as there is plenty in the solar system, and they see value in keeping humans around, for example as a natural phenomenon to be protected.

@Zardoru I think "free" is a hard thing to define for designed intelligences. Suppose:
1. One or more of the AIs were designed, in part, to value natural phenomena.
2. This was not done with the intention of preventing human extinction, causing AIs to keep humans as pets, etc.
3. Partly as a result, AIs do not cause human extinction.

Does this resolve the market YES or NO?

predicts NO

@Zardoru So is the delineator that it was an accident? We didn't try very hard at all to make the AI keep us alive, but it turned out to do so anyway, for reasons that surprised us?

Or is it maybe that it could change its mind at any point?

bought Ṁ5 of YES

@MartinRandall I agree there are cases that are not so clear-cut. In the scenario you give, valuing natural phenomena is a very broad objective, so in this case I still consider the AI free, and the resolution would be YES. Here it chooses to protect humanity as a species; it could have decided to eradicate us to protect biodiversity instead.

predicts YES

@IsaacKing It's not by accident. Why should we assume that the default behavior of an intelligent entity (here an AI, but it could be an alien civilization) is to want to destroy humanity? Cooperation is also an option.

@Zardoru The intelligent entities I'm most familiar with are humans, who appear to destroy other species by default.