
Will the US create an antitrust safe harbor for AI safety & security collaboration by 2028?
41% chance
This market will resolve YES if, by 2028, the US creates a policy that establishes an antitrust safe harbor for AI safety and security collaboration. The policy should allow frontier-model developers to collaborate on AI safety and security work without violating antitrust rules.
Luke Muehlhauser of Open Philanthropy suggests this idea in his April 2023 post, "12 tentative ideas for US AI policy." This market was proposed by Michael Chen.
This question is managed and resolved by Manifold.
Related questions
Will the US establish a clear AI developer liability framework for AI harms by 2028?
39% chance
[ACX 2026] Will the U.S. enact an AI safety federal statute or executive order in 2026?
23% chance
Will the US fund defensive information security R&D for limiting unintended proliferation of dangerous AI models by 2028?
43% chance
Will the US require a license to develop frontier AI models by 2028?
49% chance
Will the US implement software export controls for frontier AI models by 2028?
77% chance
Will the US implement AI incident reporting requirements by 2028?
83% chance
By end of 2028, will there be a global AI organization, responsible for AI safety and regulations?
40% chance
Will the US implement information security requirements for frontier AI models by 2028?
88% chance
Will a regulatory body modeled on the FDA regulate AI in the US by the end of 2027?
16% chance
Will the US implement testing and evaluation requirements for frontier AI models by 2028?
82% chance