Will the first instance of an AI breakout that cannot be brought back under human control result in more than 1,000,000 deaths?
29%
chance
Dec 29, 2030
M$60 bet
MartinRandall
Martin Randall bought M$10 of NO
Betting no because in yes world it causes all the deaths.
TedSuzman
One more question: how does the market resolve if there's no qualifying "AI breakout" by 2030?
Duncan
@TedSuzman I will extend the close date as needed.
jack
Interesting question. There are examples that might qualify as uncontrolled AI causing e.g. market crashes for very short durations (less than a month), and there are examples of AI misbehaving for long periods but with minor impacts (e.g. Wikipedia editing bot wars, Amazon pricing bot wars). For a severe impact to last over a month, it would have to be really bad. I think that's unlikely but possible even with today's AI technology, given a confluence of problems and fragile systems. Can you give some examples of where your bar is for "significant real-world effects"?
Duncan
@jack If Amazon was battling for control over servers with an AI, and that impacted Amazon Web Services to the point that we had trouble accessing websites, I think that would count. A prolonged market crash with continued involvement from the AI would count. I think those are probably fair examples of the minimal bound.
jack
@Duncan thanks, sounds reasonable to me.
TedSuzman
What're some "minimal" examples of what would count as an AI in this context? For example, if someone made a computer worm that used ML to fingerprint potential servers to attack and to choose which of a prebuilt set of exploits to try, would that count? Or is that insufficiently AI-ish?
TedSuzman
(For me the probability hinges strongly on what counts as an AI.) On one end of the spectrum, you might only count things that would e.g. be capable of long-term planning, inventing theories that look creative to humans, or applying language models to interact with people to get their job done. On the other end of the spectrum, maybe you'd count a worm/ransomware that uses an LLM in a fairly "fixed" manner to determine which message it could write would be most likely to result in someone paying up.
Duncan
@TedSuzman A worm that could continuously self-modify to adapt to adverse circumstances would count, I think. No sign of long-term planning, awareness, or specific concept of humans is needed. Given that we are looking for 'significant real-world effects', we might assume persistent resource-seeking behavior and expansion into new domains, but a missile-launching worm running free for a month would certainly be sufficient even without those aspects.
