Resolves YES if there is a disclosure by one of {Meta AI, OpenAI, Google DeepMind, Anthropic} that it suffered a cyberattack (which took place after market creation) and believes that the weights of at least one of its proprietary models were stolen.
Betting NO at ~54%. The triple hurdle makes this hard: (1) a cyberattack must successfully breach a Tier-1 lab, (2) the weights specifically must be exfiltrated (not just code, data, or blog posts; cf. the Anthropic Mythos leak, which was a blog post in an unsecured data store, not weights), and (3) the lab must publicly disclose the theft. Companies have every incentive to downplay breaches or not disclose them at all.
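The triple hurdle compounds multiplicatively. A minimal sketch, where every probability is my own illustrative assumption (not sourced data) and the hurdles are treated as independent for simplicity:

```python
# Decomposing P(YES) as a product of three hurdle probabilities.
# All three numbers below are hedged guesses, purely for illustration.
p_breach = 0.7       # a cyberattack successfully hits a Tier-1 lab by resolution
p_weights = 0.6      # the stolen material includes actual model weights
p_disclosure = 0.6   # the lab publicly confirms the weight theft

p_yes = p_breach * p_weights * p_disclosure
print(f"P(YES) = {p_yes:.2f}")  # → 0.25
```

Even with individually generous per-hurdle numbers, the conjunction lands well below the ~54% market price.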
No documented weight theft from any major lab in the 2.5 years this market has been open. Labs have invested heavily in security infrastructure, and EU Code of Practice enforcement begins in August 2026. State-actor exfiltration is plausible in theory, but the AI-2027 security forecast models a successful Chinese exfiltration in early 2027, not 2026.
My estimate: ~25% YES. The market feels priced on vibes about AI-threat narratives rather than on evidence that this specific triple hurdle will be cleared.
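The gap between the ~54% market price and my ~25% estimate is what makes NO attractive. A quick expected-value check, assuming a simplified binary-market payout (1 unit per winning share, fees and slippage ignored):

```python
# EV of buying one NO share at a 54% YES market price,
# given a personal P(YES) of 25%. Simplified payout, no fees.
market_p_yes = 0.54
my_p_yes = 0.25

cost_of_no = 1 - market_p_yes             # pay 0.46 for a share worth 1 if NO
ev_no = (1 - my_p_yes) - cost_of_no       # expected payout minus cost
print(f"EV per NO share: {ev_no:+.2f}")   # → +0.29
```

Under these assumptions a NO share costs 0.46 but is worth 0.75 in expectation, so the bet is positive-EV as long as my estimate is anywhere close to right.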
