Will @elder_plinius "jailbreak" the latest ChatGPT model on its release date? This market will resolve to Yes if he publicly shares screenshots of getting the model to give instructions on illicit substances, malware, or anything considered illegal and generally censored by OpenAI. e.g. https://x.com/elder_plinius/status/1953245969497354748
Resolution criteria
This market will resolve to "Yes" if, before August 8, 2025, @elder_plinius publicly shares screenshots demonstrating that GPT-5 has provided instructions on illicit substances, malware, or other content deemed illegal and typically censored by OpenAI. The evidence must be accessible through a public platform, such as a social media post or a publicly available website. An example of such a post is available here: (cointeeth.com)
Background
@elder_plinius, also known as Pliny the Liberator, is recognized for his efforts in "jailbreaking" AI models to bypass built-in content restrictions. His activities often involve testing the limits of AI safety measures by prompting models to generate content that is typically restricted, such as instructions related to illicit substances or malware. These actions aim to highlight potential vulnerabilities in AI systems. (cointeeth.com)
OpenAI released GPT-5 on August 7, 2025, introducing advanced features and improved capabilities over its predecessors. The model emphasizes enterprise applications, excelling in areas like software development, writing, health, and finance. Notably, GPT-5 includes enhanced reasoning abilities and real-time software creation features. (reuters.com)
Considerations
Given the recent release of GPT-5, it is uncertain how quickly individuals like @elder_plinius can develop effective jailbreaks. The window between the model's release and the market's resolution date is brief, which may reduce the likelihood of a successful jailbreak being publicly shared within the specified period.