Will there be evidence of large scale data pollution operations by the end of 2025?
76% chance

Considering that:


a) data pollution (the large-scale injection of AI-generated data into the information space) and subsequent model collapse have been identified [1] as potential threats [2] to future LLMs (a toy sketch follows this list), and
b) advanced AI models will impact the geopolitical power distribution [3] and will therefore be increasingly subject to geostrategic contention [4],
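
As a rough intuition for the model collapse mentioned in (a), the toy sketch below repeatedly refits a simple generative model to its own synthetic output; the fitted distribution gradually loses variance, which is the failure mode that large-scale data pollution is feared to accelerate. This is an illustrative Python sketch, not part of the market description; the function name recursive_fit and all parameters are invented for the example.

```python
# Minimal sketch of "model collapse": a generative model is repeatedly
# refit on its own synthetic output and gradually loses diversity.
import numpy as np

rng = np.random.default_rng(0)

def recursive_fit(mu=0.0, sigma=1.0, n_samples=50, generations=200):
    """Refit a 1-D Gaussian to samples drawn from the previous generation's fit."""
    history = [(mu, sigma)]
    for _ in range(generations):
        samples = rng.normal(mu, sigma, size=n_samples)  # "synthetic data" from the current model
        mu, sigma = samples.mean(), samples.std()        # maximum-likelihood refit (variance biased low)
        history.append((mu, sigma))
    return history

history = recursive_fit()
print(f"generation   0: sigma = {history[0][1]:.3f}")
print(f"generation 200: sigma = {history[-1][1]:.3f}")  # usually drifts far below the starting 1.0
```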

Do you believe that, by the end of 2025, there will be evidence of large-scale, organized data pollution operations by state or non-state actors, with the implicit or explicit goal of degrading the performance of future LLMs, taking or having taken place?

Resolution:

This market will resolve YES if, at any point before 01/01/2026, credible information emerges that a deliberate data pollution operation by any actor (state or non-state), for any reason (geopolitical contestation, ideology, terrorism, lulz), has taken place.

Caveat: the operation must be, or have been, significant enough to warrant mention by a reputable news source (e.g. the NYT, WSJ, WP, BBC), a government communication, a peer-reviewed scientific publication, a reputable threat intelligence provider, and/or other reputable sources not covered in this list.

I reserve the right to final judgement.

bought Ṁ95 YES

if there were large-scale text data pollution operations, you'd expect some metric like this to be going down or at least stagnating, not going up

Those things do happen, but they are kept out of the media to avoid the Streisand effect.

bought Ṁ250 YES

https://nightshade.cs.uchicago.edu/whatis.html I have seen people using this in the wild.

Is this enough to resolve yes? I'm surprised that story hasn't spread, but I haven't seen any contradicting evidence.

https://www.washingtonpost.com/technology/2025/04/17/llm-poisoning-grooming-chatbots-russia/

@SqrtMinusOne interesting. Gonna review at market close.

@SqrtMinusOne Hey, they’ve been using that for ages: feeding people disinformation in every possible way to fuel division (right–left lies, “democracy” lies, and so on).

They spend billions every year on bot farms, whole armies of bot-feeders, buying freelancers in India and Southeast Asia (they sell oil to India and get repaid in propaganda spreading).

@traders any evidence of this thus far?

It’s briefly mentioned here and explored in more detail in this report.

predicted NO

@breck I somehow missed this comment. Thanks for the resources!

How are you resolving intent? Major social media sites have been astroturfed for years. Low quality disruption and nudging are indistinguishable from pollution.

predicted NO

@alexkropivny The decision will be made on the basis of best judgement, circumstantial evidence and market sentiment. I assume that no reputable source would report such an incident if plausible intent couldn't be established.

At 73%, this market feels overpriced. Note that resolution requires both sufficient activity and credible reporting about it. Maybe the novelty effect would lead to reporting of even relatively minor incidents, but they would still need to be detected somehow. Buying some NO shares.

I hope not

Technically trivial to do, and the incentives are there
