From the MIT AI Risk Initiative: "Loss of life - Accidental or deliberate loss of life, including suicide, extinction or cessation, due to the use or misuse of a technology system."
This market is about whether a deployed AI system will be blamed for a large loss of life.
Resolution criteria
This market resolves YES if, by 31 December 2035, there is a publicly available:
- official report from a government, regulator, court, accident/incident body, or major international organisation (WHO, UN, ICAO, etc.), or
- article from a major reputable news organisation (e.g. Reuters, AP, BBC, NYT, WSJ, The Guardian) that credibly reports such a finding,

stating that:
1. There was one incident, or a clearly linked chain of incidents, in which ≥1,000 human deaths occurred (accidental or deliberate, including suicide if part of the incident).
2. The use or misuse of a deployed AI system (e.g. in transport, healthcare, weapons, critical infrastructure, etc.) was a primary cause of those deaths, not just a minor factor.
3. The primary cause is not attributed solely to “human error” or non-AI equipment failure (though humans may share responsibility).
“Deployed AI system” here means a system actually used in the real world (not just a lab demo).
Causality notes
To count, the AI must be directly involved in the operational chain of events (e.g. controlling vehicles, devices, or weapons, or issuing specific operational decisions such as “safe to proceed” that are followed and cited as a main cause).
Diffuse, long-term effects (e.g. a recommender system allegedly worsening mental health or radicalization in general) do not count unless an investigation clearly ties a specific, countable death toll primarily to that system.
In borderline cases, the market should resolve based on a good-faith judgment by the creator/mods, using the balance of evidence.
The market resolves NO on 1 January 2036 if no such case is documented.
Update 2025-12-03 (PST) (AI summary of creator comment): Cumulative deaths from self-driving cars count: If a fleet of self-driving cars causes deaths that accumulate to ≥1,000 over time (even if statistically safer than human drivers), this would qualify for YES resolution, provided the deaths are blamed on the AI system per the criteria above.
https://www.nytimes.com/2024/07/02/technology/ukraine-war-ai-weapons.html
It seems likely that AI weapons will kill more than 1,000 people in the next ten years.
@cc6 would a large fleet of self-driving cars that's statistically safer than people, but still occasionally leads to human deaths (because driving flawlessly is hard), count once it adds up to 1,000+ deaths?
@Bayesian Yes. I personally don't think too many deaths will occur due to self-driving cars. Nobody has been killed by a Waymo yet, and they aren't even driving in optimal conditions, as the vast majority of the other cars on the road are still operated by humans.