I'm open to updating the criteria below if people feel like it would improve the quality of the question.
Resolution criteria
Resolves YES if, on or before 11:59 pm UTC on December 31, 2029, at least one of the following is met:
The UK AI Security Institute (AISI, formerly the AI Safety Institute), the US AI Safety Institute (USAISI), the Frontier Model Forum, or the International AI Safety Report explicitly states that AI has materially lowered barriers for non-expert individuals to create CBRN threats.
A peer-reviewed study commissioned or co-authored by a government agency (e.g., DARPA, Dstl), or an official government-contracted red-team exercise, demonstrates AI-enabled CBRN capability enhancement for individuals with no more than a high-school education.
A peer-reviewed study published in a high-prestige venue (e.g., NeurIPS, ICML, ICLR, Nature) shows extensive evidence of CBRN capability uplift for non-experts from at least two distinct AI models, OR a peer-reviewed study published in a less prestigious venue receives at least one high-quality replication within 2 years of publication.
An officially documented CBRN incident or foiled attempt shows that a perpetrator with no post-secondary STEM education used AI tools in a way that investigators or courts conclude materially enabled the attempt (evidence: DOJ/FBI/CPS/Europol press releases, indictments, court records, or parliamentary/congressional testimony).
New AI governance regulations or export controls explicitly cite CBRN proliferation risks from non-expert actors as justification (e.g., updates to the EAR, ITAR, or the EU AI Act).
Background
DHS delivered a government report on AI–CBRN risks in 2024, highlighting emerging AI capabilities and recommending guardrails; it frames how U.S. agencies assess barrier‑reduction going forward. (dhs.gov)
The UK’s risk publications (NRR 2025) and Europol’s TE‑SAT series track technology‑enabled terrorism trends and would be expected to note if AI lowers barriers for non‑experts. (gov.uk, op.europa.eu)
The U.S. National Academies (2025) judged that current AI biological tools cannot design novel viruses and face data and complexity limits, but warned that future advances could change the risk picture; this sets a baseline for what counts as meaningful uplift. (nap.nationalacademies.org)
The UK AI Security Institute has shown that frontier models can answer expert‑level bio/chem questions and remain jailbreak‑prone; this evidence is directly relevant to “barrier‑reduction” judgments by the authorities listed above. (aisi.gov.uk)