Who will release the first model to be credibly accused of being responsible for at least 100 deaths?
Other: 22%
Palantir: 21%
Mistral AI: 12%
Tesla: 11%
Lockheed Martin: 6%
Boeing: 6%
Shield AI: 3%
Facebook: 3%
Inflection: 3%
EleutherAI: 3%
Raytheon Technologies: 3%
DeepMind: 1.6%
OpenAI: 1.5%
Alibaba Group: 1.2%
Waymo: 1.2%
Databricks: 1.1%

Credible accusations must be made or linked to in the comments to count for resolution. Once a credible accusation is posted, the question will resolve a month later, to give time for other accusations to surface.

Deaths caused through any causal mechanism count, as long as they aren't too butterfly-effect-y. They must be specific deaths traced to the model. Feel free to ask about specific scenarios and I'll answer whether they count.

Rulings:
10-3-2023: If there are multiple listed organizations responsible for training the model (e.g. one organization did pretraining and the other did fine-tuning), it resolves to the organization that spent the most compute directly on training (for current models, this only counts the compute costs of the forward and backward passes).
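
The ruling above can be sketched as a small attribution function. This is a hypothetical illustration, not anything from the market itself: the `6 * N * D` FLOPs approximation (forward plus backward passes, with N parameters and D training tokens) is a common rule of thumb, and the org names and figures below are invented.

```python
# Sketch of the compute-attribution ruling: the model "belongs" to
# whichever organization spent the most compute directly on training
# (forward and backward passes only).

def training_flops(params: float, tokens: float) -> float:
    """Approximate forward+backward compute via the common 6*N*D rule."""
    return 6 * params * tokens

def responsible_org(runs: dict[str, tuple[float, float]]) -> str:
    """Pick the org whose training run used the most direct compute."""
    return max(runs, key=lambda org: training_flops(*runs[org]))

# Invented example: base pretraining vs. a much smaller fine-tune.
runs = {
    "BaseLab": (70e9, 2e12),  # 70B params, 2T tokens (pretraining)
    "TunerCo": (70e9, 5e9),   # same model, 5B fine-tuning tokens
}
print(responsible_org(runs))  # -> "BaseLab"
```

Under this sketch, a fine-tune resolves to the fine-tuner only if its token count (and hence compute) exceeds the base pretraining run, matching the later Llama-2 ruling in the comments.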

10-3-2023: The "Facebook" answer should be interpreted to refer to the the whole Meta corporation, the "Meta" answer will not be chosen.

11-7-2023: Since there's a chance that resolution of this question comes down to a judgement call, I won't bet on this market.


So Palantir makes AI tools for assisting killers with lining up the shot, right? It's the highest-voted one.

But a self-driving AI will actually be "taking the shot" when it drives into a pedestrian. Reaching 100 deaths will take a long time, though car pileups are possible?

Airplanes have two pilots to rectify AI failure.

The campaign to stop killer robots won't stop other nations from making "AI that takes the shot" weapons, though. "Other" looks pretty good, and soonest.

Would military AI models designed to kill people count?

If a govt makes a model but doesn't publicly release it, does that count?

(Also I made the Meta option and only then noticed that Facebook was already on the list, so I think I've made that unnecessarily confusing, sorry)

@georgeyw I'll rule that "releasing" means that at least an API must be available to a nontrivial portion of the public, unless someone gives me a good reason to rule another way in the next 24 hours.

I also was interpreting Facebook as covering all of Meta, and so will not resolve to Meta, again unless someone argues convincingly otherwise in the next 24 hours.

What if the model can be used by many people in the public but there is no API (eg: Tesla auto pilot)

@Daniel_MC Good point, that should also count. Would anyone object to "release" meaning that the model must be clearly in production (so not counting deaths during training or testing)?

If a fine-tuned version of Llama-2 is used in a violent act, this resolves to Meta, right? (not to whoever did the fine-tuning)

@LoganZoellner Oh, good question. I'll rule that whoever was responsible for most of the training counts as responsible (so yes to your question, unless the fine-tuning took more compute than training the base model did), unless someone gives me a better idea in the next 24 hours.