Will we find out in 2023 about a nation state using LLMs for generating propaganda messages?
Resolved NO (Jan 1)
A while ago there was a big international journalistic investigation of "Team Jorge", an Israeli company specializing in all kinds of hacking and misinformation. They have a fake-user platform with LLM capabilities. Allegedly, it has been used by state actors to generate misinformation targeting their opponents. Here are articles in Hebrew reporting this, but the story was also published in many other newspapers, including The Guardian. Does this qualify?
https://www.haaretz.co.il/news/security/2023-02-15/ty-article-magazine/.highlight/00000186-49ad-d80f-abff-6bad73650000?utm_source=App_Share&utm_medium=Android_Native&utm_campaign=Share
https://www.haaretz.co.il/news/security/2023-02-15/ty-article-magazine/.highlight/00000186-49ac-d80f-abff-6baca72e0000?utm_source=App_Share&utm_medium=Android_Native&utm_campaign=Share
+24%
https://freedomhouse.org/article/new-report-advances-artificial-intelligence-are-amplifying-crisis-human-rights-online
+15%
So this guy claims to be working for the Israeli government: https://jewishinsider.com/2023/10/hananya-naftali-spokesman-israeli-government-war/
His Twitter account has been using LLM-generated images such as this one to spread propaganda: https://twitter.com/HananyaNaftali/status/1715416552047059366
Of course, it obviously follows that the Arabic within those messages was generated with an LLM rather than by something like Google Translate or a person.
+45%
Is it good enough to resolve? https://archive.is/pTa21
+12%
I hope everyone betting here has figured this out already: the reason this hasn't resolved yet is not that there are no LLMs being used for propaganda. It's that you need, at least with a reasonable level of evidence, to

1. tie the propaganda to a government, and
2. show that it is AI-generated.

It turns out that demonstrating either (and I'm not even saying proving, because nothing came close to that level of evidence) is difficult, and demonstrating both is so difficult that nobody has managed to do so yet.
Edit: Start here; I think the Russian use is more extensive and more misleading: https://www.newsguardtech.com/special-reports/exclusive-ai-chatbots-advancing-russian-disinformation/ This is visible use of ChatGPT on RT News, a well-known Russian government propaganda source.

Original comment: It's over: https://www.chinadaily.com.cn/a/202304/12/WS643611ffa31057c47ebb9ab3.html
✅ The website is run by the Chinese department of propaganda
✅ The video literally shows ChatGPT as the source
If this is not it, I don't know what is.
Also cited in this report: https://www.newsguardtech.com/special-reports/ai-tracking-center/
More detailed report: https://www.newsguardtech.com/special-reports/beijing-chatgpt-advances-biolabs-disinformation-narrative/
+50%

The nation state could have been using LLMs in 2022 or 2021 as well, for example, but the discovery that it was specifically using LLMs to generate propaganda messages must happen in 2023.

Articles shared:

  1. Elusive Ernie: China’s big chatbot problem

    1. Link to the BBC article

    2. How to use Baidu Ernie bot?

    3. License to operate?

  2. AI Chatbots Are Learning to Spout Authoritarian Propaganda

    1. Link to the Wired article

  3. China Ramps up AI-powered Campaign Against Taiwan

    1. Link to the Geopolitics article

  4. Nation state threats report

    1. Link to Microsoft report

    2. Page 69 is recommended

  5. Suspected Chinese operatives using AI generated images to spread disinformation among US voters, Microsoft says

    1. Link to CNN article

  6. Synthesia stuff

    1. Link

    2. Some Finnish thing

    3. Finnish thing 2

  7. Freedom House

    1. Report

    2. Report

  8. Team Jorge

    1. Report by Forbes.

    2. Article 1

    3. Article 2

    4. Qualifies? ❌

  9. Israeli claim

    1. Report

    2. Twitter

  10. U.S. Tries New Tack on Russian Disinformation: Pre-Empting It

    1. NYT article

  11.  ChatGPT is a Russian propaganda “consumer”: how do we fight it?

    1. Cadem article

  12. How Generative AI Is Boosting Propaganda, Disinformation

    1. Gov Tech article

  13. FBI’s Wray says AI has been used to amplify ‘terrorist propaganda’

    1. Link to article by The Hill

  14. China Sows Disinformation About Hawaii Fires Using New Techniques

    1. Link to NYT article

  15. Israel's possible war crimes

    1. Evidence

  16. ‘Deep-faked’ IDF Soldiers and Spoofed Websites: Sophisticated Russian Campaign Pushes Gaza Disinformation

    1. Link

    2. Qualifies? ❌

  17. How the Rise of AI fake news is creating a misinformation superspreader

    1. Link

    2. Qualifies?

  18. Tracking AI-enabled Misinformation: 614 ‘Unreliable AI-Generated News’ Websites (and Counting), Plus the Top False Narratives Generated by Artificial Intelligence Tools

    1. Link

    2. Qualifies?

  19. Jailed Pakistani Ex-PM Delivers Rare AI-Crafted Speech to Online Election Rally

    1. Link

    2. Qualifies?

  20. NewsGuard Exclusive: Russian State Media Uses AI Chatbot Screenshots to Advance False Claims

    1. Link

    2. Article used: Why are US-controlled biolabs scattered around China's neighboring countries?

      1. Link

    3. Also covered in: NewsGuard Exclusive: Beijing Deploys ChatGPT to Advance 'Biolabs' Disinformation Narrative (Link)

I have yet to go through several of the articles shared above. I will be adding my notes on why a particular article does not constitute evidence for this market, or sharing my thoughts on why it might. I will be marking the evidence as

Eligibility for resolution: ❌ (no) or ✅ (yes)

to make sure people are updating on the valid bits of information.

Propaganda: deceptive mass manipulation.


Very surprised that no one shared this here:

Based on collaboration and information sharing with Microsoft, we disrupted five state-affiliated malicious actors: two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard. The identified OpenAI accounts associated with these actors were terminated.

(https://openai.com/blog/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors)

Charcoal Typhoon used our services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns.

and

Emerald Sleet’s use of LLMs has been in support of this activity and involved research into think tanks and experts on North Korea, as well as the generation of content likely to be used in spear-phishing campaigns.

  • (LLM-supported social engineering: Using LLMs for assistance with the drafting and generation of content that would likely be for use in spear-phishing campaigns against individuals with regional expertise.)

Crimson Sandstorm (CURIUM) is an Iranian threat actor assessed to be connected to the Islamic Revolutionary Guard Corps (IRGC).

The use of LLMs by Crimson Sandstorm has reflected the broader behaviors that the security community has observed from this threat actor. Interactions have involved requests for support around social engineering, assistance in troubleshooting errors, .NET development, and ways in which an attacker might evade detection when on a compromised machine.

  • (LLM-supported social engineering: Interacting with LLMs to generate various phishing emails, including one pretending to come from an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism.)

(https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/)

predicted YES

pretty sure there are Russian state-sponsored propaganda channels that use LLMs to generate their crap, but no evidence, just my paranoia and sense of style I guess

predicted NO

@ValentinGolev oh, no doubt propaganda agencies everywhere are integrating LLMs into their workflows as we speak; the hard part is, well, hard evidence. They're not exactly known for transparency.

After having gone through all the evidence for this market, there does not appear to be a single connection that reliably shows both

  • a nation state using LLMs, and

  • the use of LLMs by that nation state specifically for generating propaganda

In my personal opinion, this has happened in many nation states, to varying degrees. However, no public evidence (or set of evidence) that links both of these assertions together has been presented to the market. And that is what was needed for this market to resolve to YES.

As such, to the best of my knowledge and judgement, we did not find out in 2023 about a nation state using LLMs for generating propaganda messages. This prediction market thus resolves to NO.

predicted YES

@firstuserhere Why are you not actually resolving the market then?

@SantiagoRomeroBrufau there appears to be a bug with the resolution. It has been reported and is being worked on.

predicted YES

I posted this 6 days ago and did not get a response. I think the market should resolve YES.

Per below, an organization devoted to countering misinformation (NewsGuard) reported that a Chinese-run website used AI-generated text to push its misinformation narrative.


https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/

“Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600”

https://www.newsguardtech.com/special-reports/ai-tracking-center/

“NewsGuard has so far identified 614 AI-generated news and information sites operating with little to no human oversight, and is tracking false narratives produced by artificial intelligence tools”

“there is strong evidence that the content is being published without significant human oversight. For example, numerous articles might contain error messages or other language specific to chatbot responses, indicating that the content was produced by AI tools without adequate editing.”

“NewsGuard analysts also identified a Chinese-government run website using AI-generated text”

predicted NO

@SantiagoRomeroBrufau We had an extensive discussion about the Chinese case. The rest are not related to governments.

Read something interesting related to the general question of states using AI:

China is starting to offer officially endorsed datasets for training LLMs (tweet).

From ImportAI:

An industry association operating under the Cyberspace Administration of China (CAC) has announced the availability of an officially-sanctioned dataset for training LLMs.

Dataset releases like this show how the Chinese government is wise to this issue and is proactively creating the means of production necessary for LLMs that reflect politically correct (a.k.a. Xi Jinping) thought.

This may seem quite distasteful to various people outside of China, but inside China this just looks like another form of AI alignment, bringing LLMs into the (state-forced) normative framework of the country.

2 traders bought Ṁ210 YES
predicted NO

@firstuserhere Can China teach LLMs newspeak?

predicted YES

State sponsored ≠ done by state? Then it's impossible to resolve YES. Why did I even bother betting?

@fb68 Incorrect, state-sponsored entities do qualify.

predicted YES

Then shouldn't it have resolved long ago?

predicted NO

@fb68 You might want to point to the specific incident/article that ought to resolve this.

predicted YES

@mxxun RT article

predicted NO

@ThomasMurphy Isn't it...easier to get quality answers by overcommunicating? E.g. mentioning that the article in question is 20th on the list, or linking it, or linking the article reporting on it?

Anyway, it seems to me that someone not obviously state-affiliated generated some propaganda, and then a state-funded media outlet reported on it with further propagandistic spin, but without overwhelming evidence that the outlet generated anything itself. So, technically not a YES.

(RT's original video seems not to cite the Twitter source, but its screenshots are certainly from there.)

Due to the ongoing mana financial crisis, I've put up a large limit order for YES at 10% if anyone wants to sell.

predicted NO

Wait, this doesn't make any sense when the market has 10k liquidity, does it? Y'all can just sell into that lol

predicted NO

@Joshua what financial crisis?

@BenjaminIkuta People have been using that term a lot recently. It could be:

  • not enough Mana for markets closing end of the year

  • Mira ragequit balancing

  • Mana inflation

🤷

Thanks for the discussion @Joshua @RobertCousineau @Shump @jacksonpolack and others -

  • The RT article doesn't count because, while it is nation-state sponsored, it is not a nation state using LLMs to generate propaganda

  • The 2nd article says:

    "ChatGPT can be used to support false narratives, including what appears to be an information operation by a Chinese propaganda outlet"

    • An LLM (ChatGPT) was featured in a video by a nation-state-backed entity, but it was not used to generate the propaganda message(s), and that's why it doesn't count.

@firstuserhere In addition, nothing ChatGPT said in the China Daily case is actually deceptive. It all seems to be reasonably true.

predicted YES

@Shump “Deceptive” is not a requirement for something to be propaganda. It just needs to be “ideas, facts, or allegations spread deliberately to further one's cause or to damage an opposing cause” (Merriam-Webster definition)

Or even if we take the Oxford dictionary definition: “information, especially of a biased or misleading nature, used to promote or publicize a particular political cause or point of view.”

“especially” does not make it a requirement.

predicted NO

@SantiagoRomeroBrufau Cool, but the market maker's definition overrides that, and FUH said in the earlier comments that it has to be deceptive.

predicted YES

@Shump I'm fairly new here, but is that really how this works? The market maker can redefine the terms halfway through trading, using their own definition of a key word? That doesn't seem very reasonable.

predicted NO

@SantiagoRomeroBrufau It's not halfway through; it was his first comment. That's also not the thing disqualifying this stuff, but it is important to note as part of the equation.

predicted YES

@Joshua The first comment is buried under mountains of other comments for anyone coming to the market. It seems much better to update the conditions in the description and add a date/time for the clarification.
