Conditional on a negative consequence of AI that shocks governments into regulating AI occurring, what will it be?
- Something porn-related (29%)
- An AI not being sufficiently woke (1.9%)
- An AI injuring or killing someone by accident (27%)
- An AI injuring or killing someone because it was told to (1.1%)
- An AI injuring or killing someone because it decided to (0%)
- AIs taking jobs (20%)
- An AI escaping from safety confinement (an "AI box") (0%)
- AIs attempting to covertly control or influence people or entities (13%)
- An AI created with malevolent goals, like ChaosGPT, becoming competent (8%)
- An AI that devotes excessive amounts of resources to its goal, such as manufacturing paperclips (0%)
- An AI that resists being switched off or destroyed (0%)
- An AI that rewrites its own top-level goals (0%)
- An AI that makes scientific advancements (0.2%)
- An AI that appears friendly but then becomes treacherous and deceptive (0%)
- An AI that is superintelligent and hence is uncontrollable, renders all jobs obsolete, and likely sees humans as inferior (0%)

This market is about the first such shock. There may be many.

The regulation must specifically cover AI, and not just particular undesired behaviours or use cases of AIs. For example, if a new law banned fabricated porn of real people created without their consent, that would not count for the purposes of this market, because such porn can be produced by humans with Photoshop.

However, if a new law covers behaviour that only AIs are capable of, and that humans cannot do either directly themselves or indirectly by writing non-AI software to do it, that would count.

Something porn-related

Does child porn in particular count here?

Being too woke

Is there a limit on the number of answers? I would like to add one.

@Orca I must have configured the question in such a way that new answers cannot be added - even by me. The only thing you could do is clone the market and then add or remove answers, I think.

Would the government of a small country, say Nauru, passing such a law count?

@Orca Yes

I actually think the incident will be AI companies becoming too powerful, verging on monopoly, prompting governments to try to stop them.

Can you add more options now?

@HanchiSun That is kind of outside the scope of this question - I am interested in regulations of AIs themselves, rather than antitrust enforcement actions against AI companies.

Is this conditional on it occurring by a specific date?

@PlasmaBallin Not really - the market end date is 10 years in the future, and I feel that should be more than enough time for at least one negative consequence of AI to eventuate.

"AIs attempting to covertly control or influence people or entities"

Does this include being used to manipulate elections?

@MaxHarms Well, if it's done covertly, yes. A Russian bot pretending to be an American? That would be a covert influence operation, yes. Cambridge Analytica buying targeted AI-generated ads on Facebook? Can't see anything covert about that.

@RobinGreen Imagine Meta releases a chatbot that "fights misinformation on election fraud" by engaging users in conversations where it pushes them to vote Democrat. If a Republican Congress freaks out about this and passes regulations, what category (if any) does it fall under?

@MaxHarms Well, yes, I guess that would count as a covert operation: while it might not be covert from the perspective of the targets, Meta describing it as solely fighting misinformation on election fraud would be covering up what it was actually doing, if it had been instructed to propagandise for Democrats. It would also count as covert if the AI developed "a mind of its own" and decided to persuade some or all users to vote Democrat of its own volition, without Meta telling it to and without "getting permission" from Meta to do so.

Desired options:

  • An AI causes a major stock market crash or similar market disaster

  • An AI is involved in a lawsuit where it claims to be a person or otherwise deserving rights

@MaxHarms Re 1, algorithms (or weak AIs) have already caused market crashes; however, the markets quickly recovered, usually after some intervention from the stock exchange and/or the relevant government. What makes you think strong AIs would be more dangerous than ordinary algorithms in this regard?

Re 2, I don't think such a lawsuit would get anywhere, although perhaps I'm just displaying presentism bias here. But no, I think that for an AI to gain legal personhood (outside of Saudi Arabia), a statutory or even constitutional change would generally be required in most countries of the world.

@RobinGreen I'm not so much saying that I think these options are especially likely as I'm saying that they don't seem so unlikely that they should be excluded from the list. 🙂
