
Will the US establish a clear AI developer liability framework for AI harms by 2028?
39% chance
This market will resolve YES if, by 2028, the US creates a policy that clarifies the liability of AI developers for concrete AI harms, particularly physical or financial harms, including those resulting from negligent security practices. The framework should specifically address the risks of frontier AI models carrying out actions, with the aim of incentivizing greater investment in safety and security by AI developers.
Luke Muehlhauser of Open Philanthropy suggested this idea in his April 2023 post, "12 tentative ideas for US AI policy." This market was proposed by Michael Chen.
This question is managed and resolved by Manifold.
Related questions
Will the US regulate AI development by end of 2025?
34% chance
Will the US create an antitrust safe harbor for AI safety & security collaboration by 2028?
41% chance
Will the US government enact legislation before 2026 that substantially slows US AI progress?
18% chance
Will the US implement AI incident reporting requirements by 2028?
83% chance
Will the US require a license to develop frontier AI models by 2028?
50% chance
Will the US fund defensive information security R&D for limiting unintended proliferation of dangerous AI models by 2028?
43% chance
Will a regulatory body modeled on the FDA regulate AI in the US by the end of 2027?
16% chance
Will the US implement testing and evaluation requirements for frontier AI models by 2028?
82% chance
Will the US government require AI labs to run safety/alignment evals by 2025?
20% chance
Will the US enact export controls for some generative AI software before 2026?
80% chance