Background
AI systems are increasingly deployed in critical domains such as transportation, healthcare, and infrastructure. While incidents involving AI systems have occurred, no accident directly caused by AI has yet resulted in mass casualties. Current regulatory frameworks, including the EU's AI Act and similar U.S. initiatives, emphasize safety and risk management in AI deployment.
Resolution Criteria
This market will resolve YES if all of the following are met:
An accident occurs before January 1, 2028
The accident results in 500 or more deaths (occurring immediately or within 6 months of the incident)
AI is determined to be the direct cause of the accident through official investigation reports
The AI system's actions or decisions were the primary factor leading to the deaths, not human error or other external factors
The market will resolve NO if any of the following apply:
No such accident occurs before January 1, 2028
An accident occurs but results in fewer than 500 deaths
An accident occurs but AI is determined to be only a contributing factor rather than the direct cause
The incident is caused primarily by human error, even if AI systems were involved
Considerations
"Directly responsible" means the AI system's autonomous decisions or actions were the primary cause of the accident, not human misuse or override of AI systems
The death count must be verifiable through official sources
Intentional acts (e.g., weaponized AI or terrorist attacks) do not count; this market is specifically about accidents
If multiple connected incidents occur within a 24-hour period due to the same AI system, they will be counted as a single accident
Deaths must be directly attributable to the accident itself (within the 6-month window defined above), not to secondary effects or long-term consequences