Credible accusations must be made or linked to in the comments to count for resolution. Once a credible accusation is posted, the question will resolve a month later, to give time for other accusations to surface.
For a language model to count as publicly released, either its weights and code must be published, or a general API must be available to the public (the API still counts if it's behind a paywall; "general API" is meant to exclude cases where acceptable prompts are limited to a small number of commands).
Deaths caused through any causal mechanism traced through the public release count, as long as they aren't too butterfly-effect-y. The deaths must be specific and traceable to the model. Feel free to ask about specific scenarios and I'll answer whether they count.
This is a variant of a previous question; this one is meant to exclude cases like self-driving cars or AI-piloted drones.
To be clear: if the model were a model of the human brain, and that same interconnected brainlike model ended up killing 100 humans with its robot body, would that not count because it wasn't using language in the process? Or would it count because those same weights, used in a different context, would be able to hold a conversation and all that?
@TheBayesian The deciding factor is whether the direct output of the model is language. If the model outputs language that is then interpreted as robotic commands, that would count. If the model can output natural language but can also directly output the controls for a robot, then it would not count.