Will the US create an antitrust safe harbor for AI safety & security collaboration by 2028?
41% chance
This market will resolve YES if, by 2028, the US creates a policy that establishes an antitrust safe harbor for AI safety and security collaboration, allowing frontier-model developers to collaborate on AI safety and security work without violating antitrust rules.
Luke Muehlhauser from Open Philanthropy suggests this idea in his April 2023 post, "12 tentative ideas for US AI policy." This market idea was proposed by Michael Chen.
Related questions
Will there be an anti-AI terrorist incident by 2028?
68% chance
Will at least 25 nations collaborate to develop and enforce unified AI development standards internationally by 2035?
75% chance
Will the US government require AI labs to run safety/alignment evals by 2025?
39% chance
By end of 2028, will there be a global AI organization, responsible for AI safety and regulations?
40% chance
Will the US government enact legislation before 2026 that substantially slows US AI progress?
29% chance
Will a leading AI organization in the United States be the target of an anti-AI attack or protest by the end of 2024?
32% chance
Will the United States ban AI research by the end of 2037?
23% chance
Will the US regulate AI development by end of 2025?
50% chance
Will the US government commit to legal restrictions on large AI training runs by January 1st, 2025?
14% chance
Will I (co)write an AI safety research paper by the end of 2024?
49% chance