If an AI system is responsible for the deaths of >= 5000 Americans by eoy 2027, which AI company would be most to blame?
xAI: 8%
OpenAI: 20%
Anthropic: 10%
Google: 12%
DeepSeek: 2%
Facebook: 4%
Palantir: 17%
Anduril: 11%
Alibaba: 3%
Other: 13%

  • Update 2026-01-21 (PST) (AI summary of creator comment): If there are multiple events where an AI system is responsible for the deaths of >= 5000 Americans by end of 2027, the market will resolve to the AI company responsible for the first such event.

  • Update 2026-01-21 (PST) (AI summary of creator comment): If no AI system is responsible for the deaths of >= 5000 Americans by end of 2027, this market will resolve to N/A (not to a "no event" option).

  • Update 2026-01-21 (PST) (AI summary of creator comment): The AI companies (not the AI systems themselves) would be considered responsible for deploying the AI systems involved in any deaths.

  • Update 2026-01-21 (PST) (AI summary of creator comment): The market is not limited to LLMs only. Other AI companies such as Anduril or Palantir are also eligible for resolution.

  • Update 2026-01-21 (PST) (AI summary of creator comment): Responsibility determination involves judgment calls by the creator. Example: If someone independently replaces a nuclear reactor control program with Claude Code and a meltdown happens, Anthropic would likely not be considered responsible. However, if Claude suggests this action and the user proceeds, Anthropic would likely be considered responsible.
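
The resolution rules in the updates above (first qualifying event wins; no qualifying event resolves N/A) can be sketched as a small Python function. This is purely an illustrative sketch, not anything the creator published; the `resolve` helper, the event-tuple format, and the example companies are all hypothetical.

```python
from datetime import date

def resolve(events, deadline=date(2027, 12, 31)):
    """Hypothetical sketch of this market's resolution rules.

    `events` is a list of (event_date, company, us_deaths) tuples.
    Returns the company behind the first qualifying event, or "N/A".
    """
    qualifying = sorted(
        (e for e in events if e[2] >= 5000 and e[0] <= deadline),
        key=lambda e: e[0],  # multiple events: the earliest one decides
    )
    if not qualifying:
        return "N/A"  # no qualifying event: market resolves N/A, not a "no event" option
    return qualifying[0][1]

# Example with made-up data: the March event happened first, so it decides.
events = [
    (date(2027, 6, 1), "CompanyA", 6000),
    (date(2027, 3, 1), "CompanyB", 7000),
]
# resolve(events) -> "CompanyB"
```

Note that, per the creator's comments, attributing an event to a company in the first place is a judgment call, which no code captures.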

Ryu18 bought Ṁ50 YES

@Ryu18 What do you know that I don't XD

@NuñoSempere Nothing, this is just based on Claude Code producing a lot of code haha

Is this market for LLMs only?

Also, ML systems just sit there until you start feeding them input vectors. In what sense would they be "responsible"?

@CraigDemel The AI systems wouldn't be responsible; the companies that deploy them would.

@CraigDemel "Is this market for LLMs only?" No; feel free to add, e.g., Anduril or Palantir if you want.

@NuñoSempere If I replace a nuclear reactor control program with Claude Code and a meltdown happens, would you consider Anthropic responsible?

I would make a judgment call. As described, probably not. If Claude suggests this to you and you go ahead, probably yes.

jack bought Ṁ25 YES

@jack What's your reasoning here?

@NuñoSempere Number of consumer users and track record of safety issues

If there are multiple such events, how does it resolve? E.g. equally or to the first or the biggest or something?

@jack Good question, to the first.

Probably not the most common take, but I much prefer these markets when one of the options is "there is no event" rather than resolving the market N/A if there isn't one.

@EvanDaniel Makes sense! But in this case I care specifically about the relative risk. Happy for you to create another market for the unconditional question, and I will link it.

opened a Ṁ100 YES order at 15%

I think Google and OpenAI are buys here, because they legitimately have far more individual users in the US (and likely will continue to), and I think the likeliest near-term risk domain is AI mass psychosis, a mass casualty event by someone coached by an AI, or things in that ballpark. And if there's a massive cyberattack or bio-event leading to loss of life, they're not bad bets regardless.

"far more individual users in the US"

makes sense

© Manifold Markets, Inc.