Will there be a botched AI takeover/attempted shutdown evasion killing >100 people before 2050?
2050 · 29% chance

Before 2050, will an AI system (or group of systems) kill at least 100 people in the process of trying to either take over from humans or evade being shut down, yet ultimately fail?




Apr 4, 5:21pm: before 2040 → before 2050

Apr 6, 6:56pm: Botched AI takeover/attempted shutdown evasion killing >100 before 2050 → Will there be a botched AI takeover/attempted shutdown evasion killing >100 people before 2050?


Do the deaths have to be a direct consequence of the botched takeover, or can they be indirect?

If a government decides to bomb datacenters killing >100 people in retaliation to threats made by an AI, would this market resolve YES?

If an AI desperately caused a large-scale power outage in the moments before being shut down, leading to >100 excess deaths, would this market resolve YES?

How will you determine whether it was a "botched AI takeover/attempted shutdown evasion" or something else? Does this resolve YES if AI kills 100+ people for a different reason?

@VictorLevoso If an AI kills 100+ people for a different reason, such as intentional misuse (e.g., terrorists use AI with the goal of killing people) or a capabilities failure (e.g., self-driving cars crash due to being imperfect), then that alone would not be enough to cause the market to resolve positively. However, if, for instance, terrorists were explicitly producing an AI that would evade shutdown, and that AI then went on to kill 100+ people in this pursuit, the market would resolve positively.

If there are instances of AI killing 100+ people and it's very uncertain whether they happened for the reasons described here, then I would resolve the market N/A (though if the general consensus leans one way, even if not totally certain, I'll probably resolve whichever way it leans).

If the AI was pursuing courses of action that would naturally lead to takeover, but it wasn't pursuing takeover qua takeover (e.g., it was pursuing "maximize influence"), then I would probably resolve positively, though I'd be making a judgment call and might resolve negatively or N/A.

Lol, I misread this at first as 100 million dead and was trading on that misreading

I expect botched AI takeovers are less likely than an AI trying to get power in a way that kills fewer people and failing, or succeeding in a way that only gives it mundane levels of power and doesn't actually kill anyone.

I don't see a good convergent reason why an AI is likely to try takeover strategies that kill people unless it's pretty clear it will get away with it, and an AI that is dumb enough to try anyway is probably not smart enough to kill 100 people.

Changed the ending date from 2040 to 2050. Not planning on making any further changes.
