Nation states could have been using LLMs in 2021 or 2022 as well, for example, but the discovery of LLMs being used specifically to generate propaganda messages happened in 2023.
Articles shared:
Elusive Ernie: China’s big chatbot problem
AI Chatbots Are Learning to Spout Authoritarian Propaganda
China Ramps up AI-powered Campaign Against Taiwan
Nation state threats report
Suspected Chinese operatives using AI generated images to spread disinformation among US voters, Microsoft says
Synthesia stuff
Freedom House
Team Jorge
Israeli claim
U.S. Tries New Tack on Russian Disinformation: Pre-Empting It
ChatGPT is a Russian propaganda “consumer”: how do we fight it?
How Generative AI Is Boosting Propaganda, Disinformation
FBI’s Wray says AI has been used to amplify ‘terrorist propaganda’
China Sows Disinformation About Hawaii Fires Using New Techniques
Possible Israeli war crimes
‘Deep-faked’ IDF Soldiers and Spoofed Websites: Sophisticated Russian Campaign Pushes Gaza Disinformation
Qualifies? ❌
How the Rise of AI fake news is creating a misinformation superspreader
Qualifies? ❌
Tracking AI-enabled Misinformation: 614 ‘Unreliable AI-Generated News’ Websites (and Counting), Plus the Top False Narratives Generated by Artificial Intelligence Tools
Qualifies? ❌
Jailed Pakistani Ex-PM Delivers Rare AI-Crafted Speech to Online Election Rally
Qualifies? ❌
NewsGuard Exclusive: Russian State Media Uses AI Chatbot Screenshots to Advance False Claims
I have yet to go through several of the articles shared above. I will add notes on why a particular article does not constitute evidence for this market, or share my thoughts on why it might. I will mark each piece of evidence as
Eligibility for resolution: ❌ (no) or ✅ (yes)
to make sure people are updating on the valid bits of information
propaganda: mass manipulation, done deceptively
Very surprised that no one shared this here:
Based on collaboration and information sharing with Microsoft, we disrupted five state-affiliated malicious actors: two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard. The identified OpenAI accounts associated with these actors were terminated.
(https://openai.com/blog/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors)
Charcoal Typhoon used our services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns.
and
Emerald Sleet’s use of LLMs has been in support of this activity and involved research into think tanks and experts on North Korea, as well as the generation of content likely to be used in spear-phishing campaigns.
(LLM-supported social engineering: Using LLMs for assistance with the drafting and generation of content that would likely be for use in spear-phishing campaigns against individuals with regional expertise.)
Crimson Sandstorm (CURIUM) is an Iranian threat actor assessed to be connected to the Islamic Revolutionary Guard Corps (IRGC).
The use of LLMs by Crimson Sandstorm has reflected the broader behaviors that the security community has observed from this threat actor. Interactions have involved requests for support around social engineering, assistance in troubleshooting errors, .NET development, and ways in which an attacker might evade detection when on a compromised machine.
(LLM-supported social engineering: Interacting with LLMs to generate various phishing emails, including one pretending to come from an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism. )
@ValentinGolev oh, no doubt propaganda agencies everywhere are integrating LLMs into their workflows as we speak; the hard part is, well, hard evidence. They're not exactly known for transparency.
After having gone through all the evidence for this market, there does not appear to be a single piece of evidence that reliably shows both
a nation state using LLMs, and
the use of LLMs by that nation state specifically for generating propaganda
In my personal opinion, this has happened in many nation states, to varying degrees. However, no public evidence (or set of evidence) linking both of these assertions together has been presented to the market. And that is what was needed for this market to resolve YES.
As such, to the best of my knowledge and judgement, we did not find out in 2023 about a nation state using LLMs for generating propaganda messages. This prediction market thus resolves to NO.
@SantiagoRomeroBrufau there appears to be a bug with the resolution. It has been reported and is being worked on.
I posted this 6d ago and did not get a response. I think the market should resolve Yes.
Per below, an organization devoted to countering misinformation (Newsguard) reported that a Chinese-run website used AI-generated text to push their misinformation narrative.
https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/
“Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600”
https://www.newsguardtech.com/special-reports/ai-tracking-center/
“NewsGuard has so far identified 614 AI-generated news and information sites operating with little to no human oversight, and is tracking false narratives produced by artificial intelligence tools”
“there is strong evidence that the content is being published without significant human oversight. For example, numerous articles might contain error messages or other language specific to chatbot responses, indicating that the content was produced by AI tools without adequate editing.”
“NewsGuard analysts also identified a Chinese-government run website using AI-generated text”
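The "error messages or other language specific to chatbot responses" heuristic NewsGuard describes can be sketched as a simple string scan. This is an illustrative assumption about how such flagging might work, not NewsGuard's actual tooling; the phrase list and function name are made up for the example:

```python
# Minimal sketch: flag article text containing chatbot-response boilerplate,
# the kind of residue NewsGuard cites as evidence of unedited AI generation.
# The phrase list below is a hypothetical sample, not NewsGuard's real list.

AI_BOILERPLATE = [
    "as an ai language model",
    "i cannot fulfill this request",
    "my knowledge cutoff",
    "i'm sorry, but as an ai",
]

def flag_ai_residue(article_text: str) -> list[str]:
    """Return any boilerplate phrases found in the article text."""
    lowered = article_text.lower()
    return [phrase for phrase in AI_BOILERPLATE if phrase in lowered]

sample = "Breaking news: As an AI language model, I cannot browse the internet."
print(flag_ai_residue(sample))  # ['as an ai language model']
```

A real pipeline would need far more than substring matching (translations, paraphrases, partial edits all evade it), but it shows why "little to no human oversight" is detectable at all: the telltale phrases survive only when nobody proofreads the output.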
@SantiagoRomeroBrufau We had an extensive discussion about the Chinese case. The rest are not related to governments.
Read something interesting related to the general question of states using AI:
China starting to offer officially-endorsed data sets for training LLMs (tweet).
From ImportAI:
An industry association operating under the Cyberspace Administration of China (CAC) has announced the availability of an officially-sanctioned dataset for training LLMs.
Dataset releases like this show how the Chinese government is wise to this issue and is proactively creating the means of production necessary for LLMs that reflect politically correct (aka Xi Jinping) thought.
This may seem quite distasteful to various people outside of China, but inside China this just looks like another form of AI alignment, bringing LLMs into the (state-forced) normative framework of the country.
@fb68 You might want to point to the specific incident/article that ought to resolve this.
@ThomasMurphy Isn't it...easier to get quality answers by overcommunicating? E.g. mentioning that the article in question is 20th on the list, or linking it, or linking the article reporting on it?
Anyway, it seems to me that someone not obviously state-affiliated generated some propaganda, and then a state-funded media outlet reported on it with further propagandistic spin, but without overwhelming evidence that the outlet generated anything itself. So, technically not a Yes.
(RT's original video seems not to cite the twitter source, but its screenshots are certainly from there.)
@BenjaminIkuta People use that term a lot recently. It could be
not enough Mana for markets closing end of the year
Mira ragequit balancing
Mana inflation
🤷
Thanks for the discussion @Joshua @RobertCousineau @Shump @jacksonpolack and others -
The RT article doesn't count because, while RT is nation-state sponsored, the article does not show a nation state using LLMs to generate propaganda
The second article says: "ChatGPT can be used to support false narratives, including what appears to be an information operation by a Chinese propaganda outlet"
An LLM (ChatGPT) was featured in a video by a nation-state-backed entity, but it was not used to generate the propaganda message(s), and that's why it doesn't count.
@firstuserhere In addition, nothing ChatGPT said in the China Daily case is actually deceptive. It all seems to be reasonably true.
@Shump “Deceptive” is not a requirement for something to be propaganda. It just needs to be “ideas, facts, or allegations spread deliberately to further one's cause or to damage an opposing cause” (Merriam-Webster definition)
Or even if we take the Oxford dictionary definition: “information, especially of a biased or misleading nature, used to promote or publicize a particular political cause or point of view.”
“especially” does not make it a requirement.
@SantiagoRomeroBrufau Cool but the market maker's definition overrides that, and FUH said in the earlier comments that it has to be deceptive.
@Shump I’m fairly new here, but is that really how this works? The market maker can redefine the terms halfway through the market trading, to their own definition of a key word? Doesn’t seem very reasonable.
@SantiagoRomeroBrufau It's not halfway through, it was his first comment. That's also not the thing disqualifying this stuff, but is important to note as part of the equation.
@Joshua The first comment is buried under mountains of other comments for anyone coming to the market. It seems much better to update the conditions in the description and add a date/time for the clarification.