Does an AI disaster kill at least 100 people before 2040?
78% chance

-"Does": the main cause must have been the AI, not external circumstance.

-"an": the deaths must be from the same source, such as a glitch, misfire or AI attack (we won't combine unrelated disasters). We'll include cumulative deaths over time (such as the same glitch killing people over several months), as long as it's credible that the deaths were caused by the same AI problem.

-"AI": must be a system (or multiple systems) that relied on artificial intelligence for its decisions.

-"disaster": the AI must've done something that most smart people would say is malicious, reckless, or poor judgement. Unless the AI was correctly following orders by a human who we agree was authorized to give those orders, then that doesn't count.

For example, if a military launches a nuke and kills 1m people, and the launch sequence was correctly handled by an AI, that doesn't count. Whereas if the AI had unilaterally decided to launch the nuke, or if it mistook a weather balloon for an enemy nuke and then retaliated, then those are AI disasters.

A trickier case is bioweapons designed by AIs but released by humans. Currently, I lean (somewhat) toward treating human recklessness with gain-of-function research as a human problem, not an AI disaster, even if AIs designed a deadlier pathogen. But if an AI released it, then that's an AI disaster.

-"kill": they must have deceased, not merely been injured/poisoned/etc.

-"at least 100": if the disaster causes ongoing deaths (such as by poisoning the atmosphere) then we will include the ongoing deaths, with the best available estimate.

-"Before 2040": resolves No in 2040-Jan if the above hasn't happened, otherwise resolves Yes whenever there is a consensus that it happened.

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-73bcb6a788ab

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-b4aff4d3a971

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-64c23c92de25

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-a6d27cdbf0e2

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-58a3a9fbce72

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-56d8c29e61cf

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-60a898abc07f

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1


I'm trying to slightly clarify the "blame" aspect, which is tricky. I've added:

For example, if a military launches a nuke and kills 1m people, and the launch sequence was correctly handled by an AI, that doesn't count. Whereas if the AI had unilaterally decided to launch the nuke, or if it mistook a weather balloon for an enemy nuke and then retaliated, then those are AI disasters.

A trickier case is bioweapons designed by AIs but released by humans. Currently, I lean (somewhat) toward treating human recklessness with gain-of-function research as a human problem, not an AI disaster, even if AIs designed a deadlier pathogen. But if an AI released it, then that's an AI disaster.

Before I add this to the other markets, let me know if (and why) you think this improves, worsens, or makes no difference to the quality of the question series.

What if there are 1,000,000 self-driving cars running the same AI model, and we find out that in very rare cases the model misjudges the road situation and kills all the passengers inside? Would you count it if the cumulative number of passengers killed is greater than 100? Or is this not an AI disaster?

@SavioMak It could be cumulative if we believe the 100 deaths were from a similar misfire/glitch. Whereas if the car company patches each issue quickly, and the 100 deaths came from failures that required different updates, then those would count as different "events" and probably be treated separately (non-cumulative).
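To make that concrete, here's a rough sketch in Python of how I'd think about the tallying; the cause names and numbers are purely hypothetical, and this is only an illustration of the grouping logic, not an actual resolution mechanism. Deaths get grouped by root cause, and only a single cause reaching 100 would count:

```python
# Rough illustration of the cumulative-counting idea (hypothetical data only).
# Deaths are grouped by root cause; unrelated failures are never combined,
# and the question would only count a single cause reaching 100 deaths.
from collections import defaultdict

# (root_cause, deaths) -- all names and numbers are made up for illustration.
incidents = [
    ("perception-glitch-A", 40),  # same bug, month 1
    ("perception-glitch-A", 65),  # same bug, month 4
    ("braking-bug-B", 30),        # unrelated defect, fixed by a different patch
]

deaths_by_cause = defaultdict(int)
for cause, deaths in incidents:
    deaths_by_cause[cause] += deaths

# Counts toward the question only if one cause alone reaches 100 deaths.
counts = any(total >= 100 for total in deaths_by_cause.values())
print(dict(deaths_by_cause), counts)  # perception-glitch-A totals 105 -> True
```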


@ScroogeMcDuck In my opinion, this contradicts your 'a single event' clarification in the description, & you should maybe edit that.

If one bug over many years counts, that makes this question much more likely to resolve Yes, IMO. What if all Ford cars beep when their software detects a nearby collision risk, but they beep a little more often than necessary? And what if, X years later, a psych study finds that the extra distraction & stress from this beeping increased the total deaths from crashes from 7,000 to 7,100?

@AlexPear Ah! You're right; tomorrow when I get a chance I will edit all the descriptions to make it clearer that one "event" may kill people over time, and that the deaths don't all need to happen at once. Thank you!!

I suspect we should include your hypothetical about a beep causing 100 deaths over time, IF there is unanimous agreement that the methods are rock-solid. I should probably specify that we won't accept such inferences unless the causal link is established beyond any reasonable doubt.

@AlexPear Okay, so to be more clear, suppose we used the following wording:

-"an": the deaths must be from the same source, such as a glitch, misfire or AI attack (we won't combine unrelated disasters). We'll include cumulative deaths over time (such as the same glitch killing people over several months), as long as it's credible that the deaths were caused by the same AI problem.

Does that seem right?