Conditional on a negative consequence of AI occurring that shocks governments into regulating AI, what will it be?
23% — Something porn-related
0.8% — An AI not being sufficiently woke
24% — An AI injuring or killing someone by accident
4% — An AI injuring or killing someone because it was told to
1.3% — An AI injuring or killing someone because it decided to
24% — AIs taking jobs
1% — An AI escaping from safety confinement (an "AI box")
6% — AIs attempting to covertly control or influence people or entities
0.5% — An AI created with malevolent goals, like ChaosGPT, becoming competent
0% — An AI that devotes excessive amounts of resources to its goal, such as manufacturing paperclips
0.3% — An AI that resists being switched off or destroyed
0% — An AI that rewrites its own top-level goals
0.3% — An AI that makes scientific advancements
15% — An AI that appears friendly but then becomes treacherous and deceptive
0% — An AI that is superintelligent and hence is uncontrollable, renders all jobs obsolete, and likely sees humans as inferior

This market is about the first such shock. There may be many.

The regulation must specifically cover AI, and not just particular undesired behaviours or use cases of AIs. For example, if a new law banned fabricated porn of real people created without their consent, that would not count for the purposes of this market, because such porn can be produced by humans with Photoshop.

However, if a new law covers behaviour that only AIs are capable of — behaviour humans cannot perform either directly themselves or indirectly by writing non-AI software to do it — that would count.
