Does an AI disaster kill at least 10,000,000 people before 2040?
18% chance

-"Does": the main cause must have been the AI, not external circumstance.

-"an": the deaths must be from the same source, such as a glitch, misfire or AI attack (we won't combine unrelated disasters). We'll include cumulative deaths over time (such as the same glitch killing people over several months), as long as it's credible that the deaths were caused by the same AI problem.

-"AI": must be a system (or multiple systems) that relied on artificial intelligence for its decisions.

-"disaster": the AI must've done something that most smart people would say is malicious, reckless, or poor judgement. Unless the AI was correctly following orders by a human who we agree was authorized to give those orders, then that doesn't count.

-"kill": they must have deceased, not merely been injured/poisoned/etc.

-"at least 10,000,000": if the disaster causes ongoing deaths (such as by poisoning the atmosphere) then we will include the ongoing deaths, with the best available estimate.

-"Before 2040": resolves No in 2040-Jan if the above hasn't happened, otherwise resolves Yes whenever there is a consensus that it happened.

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-73bcb6a788ab

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-b4aff4d3a971

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-64c23c92de25

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-a6d27cdbf0e2

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-58a3a9fbce72

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-56d8c29e61cf

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1-60a898abc07f

/ScroogeMcDuck/does-an-ai-disaster-kill-at-least-1


If the WIV uses AI to design its next bioweapon, and that starts another pandemic like COVID-19, killing >10M people, does that count?

@JonathanRay Great question; I haven't actually decided this yet. It will partly depend on what we find out about the event. Some dimensions I'm considering:

  1. An increase in how easily we can point to a causal blunder by the human workers/managers -> a decrease in how much I'd count it as an AI disaster.

  2. A weaker general track record of safety at the lab -> a decrease in how much I'd attribute the disaster to the AI.

  3. An increase in how much we think the pathogen's infectiousness and virulence are "explained by" the AI's contribution to its development -> an increase in how much I'd count it.

I know this doesn't answer the question; it's a hard one! Open to suggestions.

These markets should replace the tired "will AI wipe out humanity" markets. Thanks for creating them!

@CarsonGale Thank you for the kind words!