
Will the US require a license to develop frontier AI models by 2028?
49% chance
This market will resolve YES if, by 2028, the US creates a policy that requires a license to develop frontier AI models, defined as models with highly general capabilities (above a certain threshold) or trained with at least a certain compute budget (e.g., as much compute as $1 billion can buy today). Such a policy would aim to improve government visibility into potentially dangerous AI model development, allow more control over the proliferation of such models, and make other compute-governance policies more feasible.
Luke Muehlhauser from Open Philanthropy suggests this idea in his April 2023 post, "12 tentative ideas for US AI policy." This market idea was proposed by Michael Chen.
This question is managed and resolved by Manifold.
Related questions
Will the US implement testing and evaluation requirements for frontier AI models by 2028?
82% chance
Will the US implement information security requirements for frontier AI models by 2028?
88% chance
Will the US implement software export controls for frontier AI models by 2028?
77% chance
By 2030 will regulations or laws make it prohibitively difficult to develop AI models in the United States?
20% chance
Will the US implement AI incident reporting requirements by 2028?
83% chance
Will the U.S. have passed legislation that requires cybersecurity around AI models before 2030?
80% chance
Will the US establish a clear AI developer liability framework for AI harms by 2028?
39% chance
Will the US impose restrictions on training new advanced AIs before 2030?
12% chance
When will a national law be in force in the US imposing safety requirements on the training/deployment of AI models?
Will the US fund defensive information security R&D for limiting unintended proliferation of dangerous AI models by 2028?
43% chance