Credible accusations must be made or linked to in the comments to count for resolution. Once a credible accusation is posted, the question will resolve a month later, to give time for other accusations to surface.
For a language model to count as publicly released, either its weights and code must be published, or a general API must be available to the public (the API still counts if it's behind a paywall; "general API" is meant to exclude cases where acceptable prompts are limited to a small number of commands).
Deaths caused through any causal mechanism traced through the public release count, as long as they aren't too butterfly-effect-y. They must be specific deaths traceable to the model. Feel free to ask about specific scenarios and I'll answer whether they count.
This is a variant of a previous question; this one is meant to exclude cases like self-driving cars or AI-piloted drones.