Resolves YES if most SOTA models are released within some kind of sandbox that is not completely trivial.
Example scenarios:
If most models have direct read/write access to the internet, resolves NO.
If DeepMind sandboxes all of their models but Brain and OpenAI don't (assuming those are the three groups releasing SOTA LLMs), resolves NO.
Resolves NO if there's general agreement that sandboxing should be standard practice, but it isn't actually being done.
Resolves YES if companies are sandboxing but the sandboxes have notable security flaws.
Resolves YES if models have only some kind of restricted read access to the internet.
Oct 4, 12:26pm: Clarification: Resolves YES if models are deployed in sandboxes with some intentional gaps, e.g., a chatbot that has read/write access to a specific communication channel but is otherwise treated as untrusted code.
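For illustration, here is a minimal Python sketch of that "intentional gap" pattern. It assumes the model runs as a separate executable; the `./model` path and the `run_model_turn` helper are hypothetical names, not a real API. The untrusted model process gets exactly one channel to the outside world (its stdin/stdout pipe carrying chat messages) and nothing else from its parent:

```python
# Hypothetical sketch of a sandbox with an intentional gap: the model runs
# as an untrusted subprocess whose only link to the outside world is the
# stdin/stdout pipe carrying chat messages. The ./model executable and the
# helper name are illustrative assumptions, not a real API.
import subprocess

def run_model_turn(user_message: str) -> str:
    """Relay one chat message to the sandboxed model and return its reply."""
    proc = subprocess.run(
        ["./model"],          # untrusted model binary (hypothetical)
        input=user_message,   # the one permitted input channel
        capture_output=True,  # the one permitted output channel
        text=True,
        timeout=30,           # kill the untrusted process if it hangs
        env={},               # inherit no environment variables
    )
    # Everything else (network, filesystem) must be denied by whatever
    # OS-level isolation wraps this call; this snippet only shows the
    # shape of the single allowed channel.
    return proc.stdout
```

In a real deployment, the isolation itself would come from something like a container or seccomp profile around the process; the sketch is only meant to show what "read/write access to one specific channel, otherwise untrusted" looks like in practice.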