This question will resolve YES if any of the following are reported:
- An unauthorized actor breaches an AI lab's network security. For example, an AI lab's model weights are exfiltrated.
- A capability improvement belonging to an AI company is shared without authorization. For example, an engineer is publicly accused of sharing secrets with another company.
A data breach involving only customer data, like the ChatGPT bugs in 2023, will not trigger a YES resolution.
This market will resolve NO if, by Jan 1, 2025, no significant incident of this kind has been publicly reported.
This is a near-identical market to Rob Wiblin's 2023 market here.
Important question! I've curated it on https://theaidigest.org/timeline; it'd be nice to see more questions on lab infosec and harms from breaches.