Resolves according to Metaculus resolution.
Metaculus high-level description:
This question will resolve as Yes if the United States federal government enacts a law or issues an executive order, after December 31, 2025 and before January 1, 2027, that introduces at least one new AI safety requirement on any private entity developing or deploying AI models.
For purposes of this question, an “AI safety requirement” is a requirement intended to prevent material harm from an AI system’s general capabilities or misuse potential, such as mandated risk assessments, safety evaluations, red-teaming, incident reporting, deployment constraints, or access controls. Such requirements count only if they are imposed on the basis of the model’s general capability level (or its classification as a general-purpose model), rather than on the basis of a specific application, audience, or content category. Requirements concerning privacy, copyright, or federal procurement do not count.