Update 2026-01-21 (PST) (AI summary of creator comment): If there are multiple events where an AI system is responsible for the deaths of >= 5000 Americans by end of 2027, the market will resolve to the AI company responsible for the first such event.
Update 2026-01-21 (PST) (AI summary of creator comment): If no AI system is responsible for the deaths of >= 5000 Americans by end of 2027, this market will resolve to N/A (not to a "no event" option).
Update 2026-01-21 (PST) (AI summary of creator comment): Responsibility attaches to the AI companies that deployed the systems involved in any deaths, not to the AI systems themselves.
Update 2026-01-21 (PST) (AI summary of creator comment): The market is not limited to LLMs only. Other AI companies such as Anduril or Palantir are also eligible for resolution.
Update 2026-01-21 (PST) (AI summary of creator comment): Responsibility determination involves judgment calls by the creator. Example: If someone independently replaces a nuclear reactor control program with Claude Code and a meltdown happens, Anthropic would likely not be considered responsible. However, if Claude suggests this action and the user proceeds, Anthropic would likely be considered responsible.
Readers can also find the unconditional question here: <https://manifold.markets/EvanDaniel/if-an-ai-system-is-causes-the-death>
Update 2026-01-21 (PST) (AI summary of creator comment): Chinese AI labs such as Baidu, Tencent, and Moonshot are eligible for resolution as potential AI companies that could be held responsible.
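To make the resolution rules above concrete, here is a minimal sketch in Python of the stated logic. The `Event` fields and `resolve` function are hypothetical names for illustration only; in practice, deciding which company is "responsible" is a judgment call by the creator that no code can capture.

```python
from dataclasses import dataclass
from datetime import date

DEATH_THRESHOLD = 5_000          # >= 5000 American deaths (per creator comment)
DEADLINE = date(2027, 12, 31)    # end of 2027

@dataclass
class Event:
    when: date        # date the event occurred
    company: str      # AI company judged responsible (creator's call)
    us_deaths: int    # American deaths attributed to the AI system

def resolve(events: list[Event]) -> str:
    """Resolution under the stated rules: the first qualifying event's
    company wins; with no qualifying event, the market resolves N/A."""
    qualifying = [e for e in events
                  if e.us_deaths >= DEATH_THRESHOLD and e.when <= DEADLINE]
    if not qualifying:
        return "N/A"   # resolves N/A, not to a "no event" option
    return min(qualifying, key=lambda e: e.when).company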
@CraigDemel "Is this market for LLMs only?" No; feel free to add e.g. Anduril or Palantir if you want
@NuñoSempere If I replace a nuclear reactor control program with Claude Code and a meltdown happens, would you consider Anthropic responsible?
If there are multiple such events, how does it resolve? E.g., equally, to the first, or to the biggest?
@EvanDaniel Makes sense! In this case, though, I care specifically about the relative risk. Happy for you to create another market for the unconditional question, and I will link it
I think Google and OpenAI are buys here: they legitimately have far more individual users in the US (and likely will continue to going forward), and the likeliest near-term risk domains are AI mass psychosis, or a mass-casualty event carried out by someone coached by an AI, things in that ballpark. And if there's a massive cyberattack or bio-event leading to loss of life, they're not bad bets regardless.