Credible accusations must be made or linked to in the comments to count for resolution. Once a credible accusation is posted, the question will resolve a month later, to give time for other accusations to surface.
Deaths caused through any mechanism count, as long as the causal chain isn't too butterfly-effect-y. They must be specific deaths traceable to the model. Feel free to ask about specific scenarios and I'll answer whether they count.
Rulings:
10-3-2023: If there are multiple listed organizations responsible for training the model (e.g. one organization did pretraining and the other did fine-tuning), it resolves to the organization that spent the most compute directly on training (for current models, this only counts the compute costs of the forward and backward passes).
10-3-2023: The "Facebook" answer should be interpreted to refer to the the whole Meta corporation, the "Meta" answer will not be chosen.
11-7-2023: Since there's a chance that resolution of this question comes down to a judgement call, I won't bet on this market.
So Palantir makes AI tools for assisting killers with lining up the shot, right? It's the highest-voted answer.
But a self-driving AI will actually be "taking the shot" when it drives into a pedestrian. Doing that 100 times will take a long time, though car pileups are possible.
Airplanes have two pilots to rectify AI failure.
The campaign to stop killer robots won't stop other nations from making "AI that takes the shot" weapons, though. "Other" looks pretty good as the soonest option.
Wrote a narrower question about publicly released language models: https://manifold.markets/toms/who-will-publicly-release-the-first
@georgeyw I'll rule that "releasing" means that at least an API must be available to a nontrivial fraction of the public, unless someone gives me a good reason to rule another way in the next 24 hours.
I was also interpreting Facebook as covering all of Meta, so this won't resolve to the "Meta" answer, again unless someone argues convincingly otherwise in the next 24 hours.
@Daniel_MC Good point, that should also count. Would anyone object to "release" meaning that the model must clearly be in production (so not counting deaths during training or testing)?
@LoganZoellner Oh, good question. I'll rule that whoever was responsible for most of the training counts as responsible (so yes to your question, unless the fine-tuning took more compute than training the base model did), unless someone gives me a better idea in the next 24 hours.