Will the US establish a clear AI developer liability framework for AI harms by 2028?
43% chance
This market resolves YES if, by 2028, the US establishes a policy that clarifies the liability of AI developers for concrete AI harms, particularly clear physical or financial harms, including those resulting from negligent security practices. The framework should specifically address the risks from frontier AI models carrying out actions, with the aim of incentivizing greater investment in safety and security by AI developers.
Luke Muehlhauser from Open Philanthropy suggests this idea in his April 2023 post, "12 tentative ideas for US AI policy." This market idea was proposed by Michael Chen.
Related questions
Will the United States ban AI research by the end of 2037? — 23% chance
Will some U.S. lawyers be negatively affected financially due to AI by end of 2025? — 55% chance
Will some U.S. software engineers be negatively affected financially due to AI by end of 2025? — 79% chance
Will the US government enact legislation before 2026 that substantially slows US AI progress? — 29% chance
Will the US government require AI labs to run safety/alignment evals by 2025? — 39% chance
Will some U.S. consultants be negatively affected financially due to AI by end of 2025? — 46% chance
Will at least some AIs receive legal protections against cruelty in the U.S. before 2050? — 65% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)? — 35% chance
Will the US regulate AI development by end of 2025? — 50% chance
Will a leading AI organization in the United States be the target of an anti-AI attack or protest by the end of 2024? — 32% chance