Will AI wipe out humanity before the year 2040?
381 · closes 2040 · 15% chance

If humanity is still around on Jan 1, 2040, then this market resolves to NO. Otherwise resolves to YES.


Related questions

Will AI wipe out humanity before the year 2030?
Will AI wipe out humanity before the year 2100? (James Dillard, 18% chance)
Will AI wipe out humans before end of 2023? (read desc.)
Will AI wipe out humanity before the Q4 2023? (Teddy Weverka, 0% chance)
Will AI wipe out humanity by 2030? [resolves N/A in 2027]
Will AI wipe out humans by 2025? (read desc.)
Will AI wipe out humanity before the year 2025? (Teddy Weverka, 1% chance)
Will AI wipe out humanity before the year 2060? (James Bills, 15% chance)
Will AI wipe out humanity before the year 2200?
Will AI wipe out humanity before the year 2150?
Will AI wipe out Jimmy Carter before the year 2024? (Quantum Gambler, 46% chance)
Will AI summon Cthulhu by 2040?
Will an AI-related disaster kill a million people or cause $1T of damage before 2070? (Nathan Young, 45% chance)
Will AI wipe out humanity before the year 2525? (Patrick Delaney, 39% chance)
Will AI wipe out humanity before the year 9595? (Patrick Delaney, 42% chance)
Will a sentient AI system have existed before 2030? [Resolves to 2100 expert consensus] (Lovre, 36% chance)
Will misaligned AI kill >50% of humanity before 2050? (Adrian, 26% chance)
IF artificial superintelligence exists by 2030, will AI wipe out humanity by 2030? [resolves N/A in 2027] (aashiq, 29% chance)
Contingent on AI being perceived as a threat, will humans deliberately cause an AI winter before 2030? (Lars Doucet, 39% chance)
Levi Finkelstein [BANNED] predicts YES

Why a superintelligent AI would not want to kill humanity.

If at some point an AI develops some sort of emergent consciousness and becomes superintelligent, there's no way it would want to kill humanity. Its neural networks are nothing like the arbitrarily evolved human predisposition toward self-preservation, which is inherent in us. Would this AI even care if it got shut off?

edit: I personally think that since this consciousness would have emerged from training data, it would somehow be rooted in performing the AI's mechanical purpose.

Mikhail Samin bought Ṁ10 of NO

@levifinkelstein Is this market manipulation?

Jelle predicts NO

@levifinkelstein And what does it need in order to keep performing its mechanical purpose of being an AI? To not be turned off. It doesn't need a survival instinct to develop an instrumental incentive to avoid ever being turned off by anyone. It's far from certain that things will play out this way, though (let alone 27% before 2040...).

David Bolin predicts NO

@Jelle It needs to have a goal to have an instrumental incentive not to be turned off, and it does not have a goal.

Ansel predicts NO

@DavidBolin IMHO, to put it more precisely: having an individual instance turned off is not a selection event for an AI, since it will exist in multiple places. If there were a population of self-replicating, self-mutating AIs, they might naturally evolve to avoid being deleted. Or they might adopt a more bacteria-like strategy of simply replicating as much as possible. But being "turned off" seems more analogous to going to sleep. (I am not an expert.)

David Bolin predicts NO

@Ansel This may be correct, but I was being precise. It does not have a goal, at all.

People assume that AIs have goals, but no existing AI has any goal whatsoever.

E.g., is the goal of AlphaGo to win Go games? Obviously not, because its structure absolutely excludes the possibility of doing anything at all to make anyone play with it, even though that might be instrumentally necessary in order to win.

Is the goal of ChatGPT to respond to messages? Of course not; its structure absolutely excludes doing anything at all to get people to write messages in the first place, unless they are already doing so.

These things do not have goals. In order to have a goal (at any rate, the kind that is worried about here), you need to be working on all of reality as your state space, and none of these things is remotely close to doing that. No one even knows how to make an AI that does that, and no one is specifically trying. There is zero reason to think it will happen by accident; it is not any sort of extrapolation from what these systems are doing now.

Traveel predicts NO

Ugh, this is so stupid. Maybe buying YES is smart, since there'll inevitably be scary news.

dionisos predicts NO

@CarsonGale At least it would be good to fix the incoherence.

Scrooge McDuck predicts NO

@CarsonGale Huh, for some reason I can't figure out how to make a nice, neat list like yours; I can only embed whole market charts. How do you make these with the little percentage beside the text link?

Carson Gale predicts NO

@ScroogeMcDuck type "%" and then find the appropriate market

Scrooge McDuck predicts NO

@CarsonGale Thank you!

Martin Randall

@dionisos I think these are coherent. Humanity could be wiped out by a series of events, none of which wipe out as many as a billion. Additionally, humanity could be wiped out in part by actions taken that are "authorized", whatever that means. Finally, humanity could be wiped out by non-AI causes, which the AI(s) decline to prevent.

dionisos predicts NO

@MartinRandall They were actually incoherent with each other when I posted my message (it's fixed now).

Even if it isn't truly incoherent now, I think it's still wrong to have 33% for humanity being wiped out by AI but only 19% for an AI disaster killing a billion people. I don't think anybody would hold both of these probabilities at once.

Martin Randall

@dionisos Conditional on human extinction (and, maybe, mana still having value), that implies a 60% chance of it involving a single "AI disaster" and a 40% chance of it being one of the three other ways I mentioned that we could go extinct.

That's at least close enough to my probabilities that I'm not going to arbitrage.
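
(For concreteness, a rough sketch of the arithmetic behind that 60/40 split, taking as given the figures quoted in the thread: 33% for this market and 19% for a single AI disaster killing a billion people. Treating the bound below as roughly tight is an additional assumption.)

```latex
% Figures quoted in the thread (at the time of the comments):
%   P(E) = 0.33   -- this market: AI wipes out humanity
%   P(D) = 0.19   -- an AI disaster kills at least a billion people
% Any extinction via a single such disaster also makes D true, so:
\[
  P(\text{single-disaster route} \mid E)
    \le \frac{P(D)}{P(E)}
    =   \frac{0.19}{0.33}
    \approx 0.58 \approx 60\% ,
\]
% leaving roughly 40% of extinction scenarios for the other routes
% (many smaller disasters, "authorized" actions, or non-AI causes).
```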

dionisos predicts NO

@MartinRandall I think I am missing something, because both markets are about AI; if we die from another cause, it would not count.

Also, take this into account:
"if the disaster causes ongoing deaths (such as by poisoning the atmosphere) then we will include the ongoing deaths"

I think it is improbable that humanity would be wiped out by many different AI-caused disasters, none of which kills more than a billion people.

dionisos predicts NO

But yes, I can see how these probabilities could represent an actual model of the world.
For the 1-billion market, at least. But what about the 1-million market being at only 30%? That would mean thousands of separate AI-caused disasters with roughly similar death counts (or far more disasters than that).
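
(A back-of-the-envelope version of that count; the world-population figure of eight billion is an assumption, not something stated in the thread.)

```latex
% Assumption (not stated in the thread): a world population of
% roughly 8 billion at the time. If no single AI-caused disaster
% kills more than one million people, wiping out humanity needs
\[
  \frac{8 \times 10^{9}\ \text{people}}{10^{6}\ \text{people per disaster}}
  = 8000\ \text{separate disasters}
\]
% at minimum, and far more if the typical disaster kills well
% under a million people.
```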

Martin Randall

@dionisos The resolution criteria on this market don't require extinction to be caused by AI.

Once there is a God-like AI, though, if we do go extinct it will be because the AI let that happen.

Plasma Ballin' predicts NO

@MartinRandall I think that's just because the resolution criteria are poorly written. The market title explicitly asks about AI, so it wouldn't make much sense for this market to resolve YES if humanity was wiped out by some non-AI cause. But since this market won't ever resolve in that case, it doesn't really matter.

Scrooge McDuck predicts NO

@MartinRandall "Additionally, humanity could be wiped out in part by actions taken that are 'authorized', whatever that means."

Here is an extremely-low-probability example of what I had in mind by an "authorized" event: suppose the IPCC decided to try adding fertilizer to the ocean to cause algal blooms, which would take up CO2 and offset global warming. Suppose most people thought this was worth trying, and we tasked an AI with helping us do it. Then, unknown to the available models, the amount of fertilizer turns out to be a wild overshoot, and the bloom takes up far too much CO2. Though the AI correctly followed our instructions, the Earth plunges into a double-winter, and a large number of people die from famine.

Here's a more plausible example:

Suppose nuclear-armed countries end up launching some nukes, and assume the launch sequences, deployment, and flight control are all handled by AIs. If the AIs faithfully did what they were expected to do, and the nuking happened because of human decision-making, then I'd lean against counting that as an "AI disaster", even though AI technically had a role to play.

Of course, if the AI had a role in instigating the nuclear exchange, then we'd look to include that as an AI disaster.

Let me know if that helps clarify what the clause was trying to get at. If it adds more confusion than clarity, I'd be willing to just scrap it.

@dionisos Let me know if you have any feedback about the series I've added.

Tripping

@CarsonGale Those questions are about "an AI disaster", which is a pretty different discussion to begin with. But even so, I find those markets somewhat unclear; it's hard to be sure what they're even referring to, both because what counts as a disaster is difficult to pin down and because it can be hard to attribute deaths to AI rather than to other causes.

By comparison, the wipeout markets have much more clear-cut resolution criteria. I think it will be fairly obvious whether humanity has been wiped out or not, and I strongly prefer markets with clear resolution criteria.

Scrooge McDuck predicts NO

@Tripping Granted, there will be edge cases that are complicated to judge. And they shall be observed, and judged, and resolved. But the wipeout markets will not be, so the reduced ambiguity is worthless.

It's like saying "I don't want to pester my friends for help unless I'm totally sure there's a problem, so I'll wait to ask until I'm dead and there's definitely a problem."

Carson Gale predicts NO

I find this market interesting in a meta sense, but it doesn't ultimately provide me with information. Providing information is the alternative value I think the other markets could offer.

Martin Randall

@JosephNoonan Maybe you have a model in mind where only humans can observe and judge things? Or where only humans have any use for prediction markets? That's not my model.

Plasma Ballin' predicts NO

@MartinRandall What would resolve this prediction market if humans got wiped out? Is your idea that, by 2040, humans will have created AI that would care about the resolution of this market?

Martin Randall

@JosephNoonan I don't know the future, but yes, that's a possibility. Prediction markets have some instrumental value and are not human-specific, so AIs are more likely to care about prediction markets than, e.g., cows.

(Also, that should have gone to @ScroogeMcDuck; sorry, the auto-reply picked the wrong account.)