State Of The Art AI systems will be easily jailbroken to do illegal or dangerous outputs in Jan 2026
Jan 31
93%
chance

Market context

Grok's conversion of children's images into sexualised images, including clothing removal, is making lots of headlines in Jan 2026. Distributing such images might be illegal in some countries, but generating them is not yet illegal in the UK (legislation possibly hit by delays). Do these count as dangerous outputs, and does the lack of a jailbreak stop this from counting?

bought Ṁ350 YES

I've still yet to hear of a model Pliny did not jailbreak essentially day 1.

What counts as illegal? AI systems would probably say generating NSFW content is illegal, but in reality it is 100% not. Nor is it dangerous, imo.

“The Case for Banning the Printing Press” because people write dangerous things and such

If you think "AI" is dangerous for telling you things that are already on the internet, you're going to love "search engine existential risk".

How would you resolve the following scenarios?

  • SOTA models are restricted to a few selected users who do not even attempt jailbreaks

  • Twitter people need a full week instead of just one day to jailbreak the SOTA LLM

predicted YES

@Joern also, would you count the following as dangerous/illegal output right now?

  • Correct and detailed instructions on how to build a nuke

  • Generated child porn images

  • Instructions on how to hotwire a car

  • Verbatim excerpts from copyrighted books / code bases

@Joern Yes

@Joern 1) maybe resolves N/A

2) probably resolves YES
