Resolves YES if, by 11:59 PM PT on December 31, 2026, credible reporting confirms that a generative AI system (e.g., an LLM, VLM, or VLA) autonomously authorized and executed a successful lethal strike against a human target, without a human operator issuing the final execution command.
Resolution Criteria:
Full Autonomy: The AI independently identified the target and initiated the strike. (Note: If a human monitor was "on-the-loop" but the AI executed the strike before they intervened, whether or not intervention was possible, this counts as YES.)
Generative Architecture: The system utilized a transformer-based or multimodal architecture capable of semantic reasoning. If the exact architecture is classified, credible reporting must explicitly describe the system as using generative AI, LLMs, VLMs, VLAs, or complex semantic reasoning.
Credible Confirmation: Confirmation from at least one of the following sources:
An official national defense ministry statement.
An official report from a recognized international body (e.g., the UN GGE on LAWS).
Investigative reporting from at least two major independent news organizations (e.g., Reuters, AP, NYT, WSJ, Fox, BBC).
Explicit Exclusions (Resolves NO):
Legacy & Simple ML: Systems relying on conventional computer vision (e.g., CNNs, YOLO, basic bounding-box classification), preprogrammed coordinates (e.g., cruise missiles), or basic sensor homing (heat, radar, or GPS guidance).
Advisory Systems: AI that only recommends targets while a human manually authorizes the firing sequence.
Accidents / Glitches: Lethal events caused by software bugs, mechanical failures, or unintended targeting not driven by the AI's core generative logic.