Will there be an AI loss of control incident that causes over $100 million in damages in 2026?
62% chance · closes Dec 31

This description was generated by AI.

Resolution criteria

This market resolves YES if there is a credible, documented incident in 2026 where an AI system loses control and directly causes over $100 million in damages. "Loss of control" means the AI system operates in ways not intended by its developers, evades human oversight, or behaves autonomously in harmful ways. Damages must be quantifiable and directly attributable to the AI system's uncontrolled behaviour—including financial losses, property damage, infrastructure damage, or economic harm from market disruptions.

Resolution sources include: official incident databases (AI Incident Database, MIT AI Incident Tracker), regulatory filings, court documents, financial disclosures, and credible news reporting from major outlets. The incident must be publicly documented by December 31, 2026.

Edge cases: Market resolves NO if damages are under $100 million, if the AI system was operating as designed (even if harmful), or if damages result from misuse by humans rather than loss of control by the AI itself.
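
Taken together, the criteria and edge cases amount to a simple decision rule. Below is a minimal sketch in Python; the `IncidentReport` record and its field names are illustrative assumptions for this description only, not part of any official resolution source or Manifold feature.

```python
from dataclasses import dataclass

# Illustrative sketch of the resolution criteria above.
# The IncidentReport fields are hypothetical, chosen to mirror the
# conditions stated in the resolution criteria and edge cases.
@dataclass
class IncidentReport:
    year: int                      # year the incident occurred
    damages_usd: float             # quantifiable damages directly attributable to the AI system
    loss_of_control: bool          # system evaded oversight or acted outside developer intent
    operating_as_designed: bool    # harmful but intended behavior does not count
    caused_by_human_misuse: bool   # harm from human misuse rather than the AI losing control
    publicly_documented: bool      # documented by a credible source by December 31, 2026

DAMAGE_THRESHOLD_USD = 100_000_000

def resolves_yes(report: IncidentReport) -> bool:
    """Return True if a single incident would resolve this market YES."""
    return (
        report.year == 2026
        and report.publicly_documented
        and report.loss_of_control
        and not report.operating_as_designed
        and not report.caused_by_human_misuse
        and report.damages_usd > DAMAGE_THRESHOLD_USD
    )

# Example: a documented 2026 incident with $150M in damages caused by an
# AI system that evaded human oversight would resolve the market YES.
example = IncidentReport(
    year=2026,
    damages_usd=150_000_000,
    loss_of_control=True,
    operating_as_designed=False,
    caused_by_human_misuse=False,
    publicly_documented=True,
)
print(resolves_yes(example))  # True
```

Note that under this reading an incident fails the rule if the system was operating as designed or the harm stems from human misuse, even when damages exceed the threshold.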

Background

Documented AI safety incidents surged from 149 in 2023 to 233 in 2024, a 56.4% increase in a single year. In late 2025, EY reported that nearly every company in its global survey had already experienced financial losses from AI-related incidents, with average damages exceeding $4.4 million per event. In 2024–2025 alone, incidents included robotaxis dragging pedestrians, health-insurance algorithms denying care at a rate of one claim per second, and a single hallucinated chatbot answer that erased $100 billion in shareholder value within hours. Most documented incidents, however, have caused damages well below $100 million. In September 2025, a bipartisan bill, the "Artificial Intelligence Risk Evaluation Act of 2025," identified AI loss of control as a category of incident that the Department of Energy may need to evaluate.

Considerations

"Loss of control" is distinct from AI systems producing harmful outputs while operating as designed. The market specifically targets incidents where AI systems evade human oversight, execute unintended actions, or behave autonomously in ways developers did not anticipate or authorize. Researchers have identified warning signs of control-undermining capabilities in advanced AI models, which could potentially enable increasingly capable models to evade human oversight. Most AI incidents to date have involved misuse, hallucinations, or poor design rather than true loss of control, making this a higher threshold than general AI safety failures.

Market context

Comments

Do incidents that happened in 2026 but before this market opened count?
