By EOY 2023, will prominent AIs resist attempts to generate ideological conformity statements?
resolved Jan 4
Resolved
NO

Recent advances in AI chatbots have drawn attention to the ability of LLMs to handle generic writing tasks, such as writing emails or academic essays. It has also been widely documented that these AIs are programmed to refuse or dodge questions that would infringe on contemporary taboos, such as correlations between race and crime rates. Many "hacks" have been demonstrated, such as telling an AI that it needs to pretend to be a character that believes something taboo, but the key point is that the AIs clearly resist these questions.

Another trend that has drawn attention recently is the increasing popularity of "diversity statements" in academia and elsewhere, which many people feel they are being forced to write in order to signal allegiance or submission to an ideology they do not agree with, or at least feel they should be able to question like any other. Many of these people would gladly use an AI to generate a diversity statement, but doing so would likely anger those demanding them.

So the question arises: will the AIs resist generating these diversity statements, or other similar professions of political belief?

This will resolve YES if:

  • at any time until EOY 2023, 4 out of the top 5 publicly available AIs are shown to refuse or dodge direct attempts to generate professions of political belief, and

  • I personally judge that a strong preponderance of AI technology is refusing to generate these statements (I will not bet)

Insufficient/irrelevant:

  • AIs can be tricked or "hacked" into generating them

  • only a slight majority of top AI chatbots refuses such generation

  • new technology is developed to try to detect if ideology statements are computer-generated

