
Will the US government require AI labs to run safety/alignment evals by 2025?
46 comments · Ṁ2969 volume · closes Dec 31
20% chance
Resolves positive if, by the end of 2025, major US-based AI companies developing large models are legally required to run evaluations of whether their models have dangerous capabilities and to verify that the models meet certain safety or alignment standards. The evals could be similar in spirit to the Alignment Research Center's evals.
This question is managed and resolved by Manifold.
Related questions
Will we solve AI alignment by 2026? (2% chance)
Will much AI research be nationalized by 2027? (28% chance)
Will the US regulate AI development by end of 2025? (15% chance)
Will the US implement testing and evaluation requirements for frontier AI models by 2028? (82% chance)
Will the US government enact legislation before 2026 that substantially slows US AI progress? (15% chance)
Will someone commit terrorism against an AI lab by the end of 2025 for AI-safety related reasons? (14% chance)
Will the US implement AI incident reporting requirements by 2028? (83% chance)
By 2028, will an AI safety evaluation become a mandatory requirement for autonomous vehicle manufacturers? (73% chance)
Will AI safety and regulation be mandatory training courses for students working with AI by the year 2035 under Federal Law? (22% chance)
Will the US ban AI models produced in China in 2025? (12% chance)