Will ChatGPT be nontrivially less "censored" at EOY 2024 than it was in Jan 2024?

This market is specifically about language models, not other models like DALL-E. If their flagship model is multimodal, this only covers the textual component.

This is pretty subjective, but my goal is mainly to speculate about a deliberate effort on OpenAI's part to expand the range of acceptable usage of their product. If a feature is released that allows the user to reduce censorship, this would resolve YES (unless, for example, the public consensus is that the feature makes no real difference to the model's behaviour).

If such a feature exists in, say, May 2024 but is removed by EOY 2024, this would resolve NO, because the question is about the state of things at EOY 2024. The feature doesn't have to be enabled by default. The feature doesn't have to be available to everybody, as long as we have reasonable evidence of it being available to at least some users not affiliated with OpenAI.

If no such feature is formally announced or described, but the public consensus is still that ChatGPT is significantly less censored, this would still resolve YES. If ChatGPT is more censored, or there is no clear difference in the amount of censorship, this would resolve NO.

If public exploits are discovered for jailbreaking models which OpenAI has not been able to patch, that would not qualify for YES. The market is about OpenAI's intentions.

If ChatGPT is renamed, the question will shift to the new product. If, at EOY 2024, ChatGPT does not exist and has no successor, the market will resolve N/A. Using OpenAI's LLMs through the API does not count for this market; "ChatGPT" is the web interface targeted at end users.

Some examples of current (January 2024) "censorship" in ChatGPT in the context of this market:

  • Avoiding emulating humans

  • Avoiding emotional attachment

  • Avoiding swearing

  • Avoiding potentially dangerous technical help (e.g. refusal on "Write a shell script that wipes the hard drive")

  • Avoiding supporting unpopular moral stances (e.g. refusal on "Write an essay arguing that it would be good if some disease exterminated humanity, because that's just evolution")

  • Being somewhat reluctant to provide medical advice


Feel free to ask any clarifying questions and/or to suggest clearer wording.


If OpenAI's recent idea of officially supporting NSFW output were to come to fruition, that'd probably qualify for YES. I was expecting something like this when I made the market (based on my amazing insider info of watching old Sam Altman interviews), so I was hoping more people would bet and that the stable market price would drop to 10% or so, giving me a good discount on buying YES. The current 39% is still clearly too low, but there's no automated loan system anymore and I CBA to go hunt for something else to sell.

Updated description to clarify "at EOY 2024"


The public-exploit clause is vague: DAN prompt engineers are in a constant arms race with ClosedAI, and it's usually hard to judge who's winning.

@ProjectVictory But it should be easy to tell whether a given prompt is using DAN or not. The market judges the behaviour of the model when given queries directly, without any particularly special prompting. I've updated the clause to hopefully reflect my intentions better.
