Will the US establish a clear AI developer liability framework for AI harms by 2028?
43% chance

This market resolves YES if, by 2028, the US establishes a policy framework that clarifies the liability of AI developers for concrete AI harms, particularly clear physical or financial harms, including those resulting from negligent security practices. The framework should specifically address the risks of frontier AI models taking actions, with the aim of incentivizing AI developers to invest more in safety and security.


Luke Muehlhauser of Open Philanthropy suggested this idea in his April 2023 post, "12 tentative ideas for US AI policy." This market was proposed by Michael Chen.
