Will the US implement testing and evaluation requirements for frontier AI models by 2028?
82% chance
This market will resolve YES if, by 2028, the US creates a policy requiring safety testing and evaluation for frontier AI models, defined as models with highly general capabilities (above a certain threshold) or trained with a certain compute budget (e.g., as much compute as $1 billion can buy today). The policy must also mandate independent audits by qualified auditors to assess the safety and performance of these models.
Luke Muehlhauser of Open Philanthropy suggests this idea in his April 2023 post, "12 tentative ideas for US AI policy." This market idea was proposed by Michael Chen.
Related questions
Will the US government require AI labs to run safety/alignment evals by 2025?
39% chance
Will the US restrict transfer of trained AI models before 2026? (Deny ≥100 countries)
39% chance
Will the US government enact legislation before 2026 that substantially slows US AI progress?
29% chance
Will the US require and verify reporting of large AI training runs before 2026?
46% chance
Will the US regulate AI development by end of 2025?
50% chance
Will there be a Frontier AI lab in China before 2026?
75% chance
Will the United States ban AI research by the end of 2037?
23% chance
Will the US government commit to legal restrictions on large AI training runs by January 1st, 2025?
14% chance
Will a large scale, government-backed AI alignment project be funded before 2025?
15% chance