Will the US implement AI incident reporting requirements by 2028?
83% chance
This market will resolve to yes if the US establishes by 2028 a policy requiring certain kinds of AI incident reporting, similar to requirements in aviation or data breach reporting. The policy may allow for many incidents to be kept confidential within a regulatory body. The goal is to enable regulators to track specific types of harms and near-misses from AI systems, allowing them to identify dangers and quickly develop mitigation strategies.
Luke Muehlhauser of Open Philanthropy suggests this idea in his April 2023 post, "12 tentative ideas for US AI policy." This market was proposed by Michael Chen.
Related questions
Will the US restrict access outside the US to some APIs to generative AI before 2026?
35% chance
Will the US require and verify reporting of large AI training runs before 2026?
46% chance
By 2028, will an AI safety evaluation become a mandatory requirement for autonomous vehicle manufacturers?
73% chance
Will the U.S. create a new federal agency to regulate AI by the end of 2024?
13% chance
Will the US enact export controls for some generative AI software before 2026?
76% chance
Will the US government require AI labs to run safety/alignment evals by 2025?
39% chance
Will the US government commit to legal restrictions on large AI training runs by January 1st, 2025?
13% chance
Will an AI system be reported to have successfully blackmailed someone for >$1000 by EOY 2028?
74% chance
Will there be an anti-AI terrorist incident by 2028?
68% chance