What AI regulations will the US Congress pass into law before January 3, 2027?
- Restrictions on deepfake audio, images, or videos of real people (71%)
- Increased liability for harms caused by AI technology (71%)
- A classification system of AI applications into at least two different risk categories, with different non-empty regulations on each (66%)
- Restrictions on pornographic deepfakes (61%)
- Restrictions on training models on copyrighted material (32%)
- Restrictions on AI models above a certain size or amount of compute (26%)

If an option is already federal law due to a bill passed by Congress before this market was created, it will resolve N/A. Options created after the relevant law has already passed will also resolve N/A. To resolve YES, a regulation must appear in the text of a bill that passes both houses of Congress and is not successfully vetoed by the president.

A classification system of AI applications into at least two different risk categories, with different non-empty regulations on each

Does this resolve YES if any bill is passed that classifies AI applications into different categories and regulates them differently?

@josh Yes, if someone slips this into a government funding bill (somehow) it still resolves YES

@SaviorofPlant That's not the corner case I'm thinking of. It sounds like any bill regulating AI will count if it adds any criteria determining whether the regulation applies or not.

@josh Oh, I see. I don't think that would satisfy the "risk categories" part of the answer. I had in mind something like the EU AI Act (https://en.wikipedia.org/wiki/Artificial_Intelligence_Act), which explicitly identifies risk categories. The exact "risk categories" phrasing doesn't need to be used, but the classifications should be based on risk and not solely on something like modality.

@SaviorofPlant You might consider tweaking the title of that one to "at least two different risk categories with different non-empty regulations on each". That would rule out risk categorizations like "not an AI" or "not an AI being applied in a way that triggers this regulation".
