Will the first instance of an AI breakout that cannot be brought back under human control result in more than 1,000,000 deaths?
Ṁ263 · 2030 · 21% chance

Upon an AI running uncontrolled for over a month and showing significant real-world effects (market crashes, server crashes or internet denial, curing cancer, etc.), will at least 1,000,000 humans die as a fairly direct result within 60 days of discovery? "Discovery" here means the time we notice the AI running wild, not the development of the code or the actual breakout. The deaths must result from the actions of the AI and not from a human response (e.g., the US nuking Moscow to take out key military servers is all on the humans, not the AI). "Uncontrolled" in this case will mean not fully contained. This will be a judgement call on my part.

Any Manifold market admin may resolve this on my behalf if the conditions are met and there is reason to believe that I am dead.

Betting no, because in the yes world it causes all the deaths.

@MartinRandall Actually, I don't think that's terribly likely. Much more likely is that by the time we are having AI breakouts, we are fairly good at repelling them. We can train AI to defend against them from the get-go, and we are going to have to do that with haste, but I think the powers that be already know that. The question now is how to organize the world's cooperation groups in a way that lets them keep their separation as needed but join in co-protection as needed.

One more question: how does the market resolve if there's no qualifying "AI breakout" by 2030?
@TedSuzman I will extend the close date as needed.
Interesting question. There are examples that might qualify as uncontrolled AI causing, e.g., market crashes for very short durations (less than a month), and there are examples of AI misbehaving for long periods but with minor impacts (e.g., Wikipedia editing bot wars, Amazon pricing bot wars). For a severe impact to last over a month, things would have to be really bad. I think that's unlikely but possible even with today's AI technology, given a confluence of problems and fragile systems. Can you give some examples of where your bar sits for "significant real-world effects"?
@jack If Amazon was battling for control over servers with an AI, and that impacted Amazon Web Services to the point that we had trouble accessing websites, I think that would count. A prolonged market crash with continued involvement from the AI would count. I think those are probably fair examples of the minimal bound.
@Duncan thanks, sounds reasonable to me.
What're some "minimal" examples of what would count as an AI in this context? For example, if someone made a computer worm that used ML to fingerprint potential servers to attack and to pick which of a prebuilt set of exploits to try, would that count? Or is that insufficiently AI-ish?
(For me the probability hinges strongly on what counts as an AI.) On one end of the spectrum, you might only count things that would, e.g., be capable of long-term planning, inventing theories that look creative to humans, or applying language models to interact with people to get the job done. On the other end of the spectrum, maybe you'd count a worm/ransomware that uses an LLM in a fairly "fixed" manner to determine what message would be most likely to result in someone paying up.
@TedSuzman A worm that could continuously self-modify to adapt to adverse circumstances would count, I think. No sign of long-term planning, awareness, or specific concept of humans is needed. Given that we are looking for 'significant real-world effects', we might assume persistent resource-seeking behavior and expansion into new domains, but a missile-launching worm running free for a month would certainly be sufficient even without those aspects.