47. Will a successful deepfake attempt causing real damage make the front page of a major news source in 2023?
Resolved YES (Jan 1)

A deepfake is defined as any sophisticated AI-generated image, video, or audio meant to mislead. For this question to resolve positively, it must actually harm someone, not just exist. Valid forms of harm include but are not limited to costing someone money, or making some specific name-able person genuinely upset (not just “for all we know, people could have seen this and been upset by it”). The harm must come directly from the victim believing the deepfake, so somebody seeing the deepfake and being upset because the existence of deepfakes makes them sad does not count.

This is question #47 in the Astral Codex Ten 2023 Prediction Contest. The contest rules and full list of questions are available here. Market will resolve according to Scott Alexander’s judgment, as given through future posts on Astral Codex Ten.


🏅 Top traders (by total profit)

1. Ṁ2,972
2. Ṁ1,021
3. Ṁ733
4. Ṁ648
5. Ṁ369
predicted NO

For some reason I had interpreted this as being about a deepfake attempt that fools a major news source and creates real damages.

predicted YES

Very clear YES. Examples below include Ukraine, Israel-Hamas, large-scale fraud, etc.

predicted YES
predicted NO

Minor occurrence for baseline tuning:

https://m.slashdot.org/story/419138

Researchers Including Microsoft Spot Chinese Disinformation Campaign Using AI-Generated Photos

bought Ṁ345 of YES

https://www.nytimes.com/2023/08/30/business/voice-deepfakes-bank-scams.html seems like a strong resolution candidate 🙂

On the front page of 9/1/2023: https://static01.nyt.com/images/2023/09/01/nytfrontpage/scan.pdf

> Realistically this is a bad question because of articles like this where the deepfake itself isn't of major interest but is being used to make a wider point about deepfakes. I will probably be forced to count it anyway. (from here)
> A deepfake is defined as any sophisticated AI-generated image, video, or audio meant to mislead. For this question to resolve positively, it must actually harm someone, not just exist. (from question)

predicted YES

So apparently the Serbian pro-regime TV ran deepfakes of an opposition figure (Dragan Djilas and maybe others) in prime time news... Coverage in Czech: https://denikn.cz/minuta/1213887/ Let's see if that makes the front pages worldwide...

bought Ṁ50 of YES

Scott Alexander clarifies "major" here:

> 8. In 2023 will a successful deepfake attempt causing real damage make the front page of a major news source? What's a major news source? E.g., would this count? What about TV news shows or radio programmes or news sites in general that don't exactly have "front pages"?
>
> CBC seems major. I would count anything linked from the front page of their website, ie cbc.ca. Realistically this is a bad question because of articles like this where the deepfake itself isn't of major interest but is being used to make a wider point about deepfakes. I will probably be forced to count it anyway.

predicted NO

If this proves to be a deepfake, then we have a YES 🤣

https://www.bbc.com/news/world-middle-east-66349073

predicted NO

@MartinModrak India Today is a major news magazine in India, but I really don't think it counts as "front page" news to be buried deep within their revolving homepage. They print a weekly magazine, was this article referenced on the cover?

bought Ṁ100 of YES

@MartinModrak For the "front page of a major news source" part: this was reported by Reuters https://www.reuters.com/technology/deepfake-scam-china-fans-worries-over-ai-driven-fraud-2023-05-22/ and reached the front page of its Technology section http://web.archive.org/web/20230523085410/https://www.reuters.com/technology/ (disable JavaScript, or the page gets weirdly reloaded), as well as the main page of Gizmodo (http://web.archive.org/web/20230522171439/https://gizmodo.com/). So it seems to hinge on the definition of "major". Anyway, there's plenty of time to get a clear-cut case this year...

predicted YES

Why didn’t this resolve? Resolution on these should be clarified or resolved like previous markets. Please 🙏🏻

predicted YES

@BTE I think this is far from what was intended by this question. Far more convincing fakes have been made without AI long before anyone came up with the word "deepfake." There's absolutely no indication that the creator did anything beyond type "black smoke near Pentagon" into a garden-variety image generator. They didn't even bother to crop out the most obvious error (though later coverage linked in comments below did, which should tell you something), never mind anything more complicated than that. If this is a deepfake, the definition of "deepfake" has stretched so far as to become meaningless.

And that's without even getting into whether a minor blip in stock markets counts as "causing real damage." The S&P 500 dipped 0.3% according to the AP story linked in comments below, but it rose overall that morning before falling in the afternoon, after the story was widely reported as being a hoax. It ended the day essentially flat, going from 4193.9 to 4192.63. Whatever effect this picture had was dwarfed by normal volatility in the days before and after it was released.
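As a quick arithmetic check on the "essentially flat" claim (a sketch using only the index levels quoted above; the 0.3% figure is the AP's intraday dip):

```python
# Net daily move of the S&P 500 on the day of the fake Pentagon image,
# computed from the open/close levels quoted in the comment above.
open_level, close_level = 4193.90, 4192.63

daily_change_pct = (close_level - open_level) / open_level * 100
print(f"{daily_change_pct:.3f}%")  # -0.030%, an order of magnitude below the 0.3% intraday dip
```

So the net daily move was about a tenth of the reported intraday dip, which supports reading the day as noise-level.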

predicted YES

@ShadowyZephyr Others are debating whether jumpy stock markets count as "real harm." I think it's also questionable whether this counts as a "deepfake attempt," never mind a "successful" one. When people worry about deepfakes, it's worry about something new, beyond what Photoshop has already been able to do for decades: fakes that can't be easily detected, maybe that can't be detected at all. This picture, while AI-generated, is less convincing than even an amateur Photoshop job would be. If someone was serious about causing harm by attempting a deepfake, at the very least they would Photoshop in a more convincing view of the actual Pentagon. Even the screenshot you posted above cropped out the crowd barriers fading into fencing from the original; why didn't the original poster do that, at least?

The idea of a deepfake implies regular, more shallow fakes, and if this isn't a shallow fake, I don't know what is.

Does a market fluctuation of this type constitute "real harm"? What if it had caused the market to go FIRST up and THEN down? Would that have also constituted "real harm"?

predicted NO

@BenjaminShindel I mean, it's a statistical fact that someone is losing money when a market of this size fluctuates. But I can't point to anyone to say, "That person, they lost the money." I also can't say precisely how much fluctuation was due to the image, if any. It's just very, very strongly correlated.

predicted YES

@BenjaminShindel at least everybody who traded based on that information lost money, even without considering derivatives

@CodeandSolder That's not true at all? If anything, they likely made money at the expense of everyone who didn't trade based on that information (although I don't think that's really a fair way of assessing the situation either).

@TylerColeman ya but also like, the market moved by a fraction of a percent and then went back up...? It's unclear that even someone that traded a LARGE amount of money would have lost money as a result of this? It was also likely the same people selling and then buying when they realized it was fake?

predicted YES

@BenjaminShindel Well, depending on where in the graph they traded, but in general money changed hands and thus somebody lost.
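The zero-sum point being argued over can be made concrete with a toy example (entirely hypothetical trades; the dip level is a back-of-the-envelope figure from the 0.3% dip and the index levels quoted earlier in the thread):

```python
# Toy zero-sum sketch: a hypothetical panic seller dumps one index "unit"
# at the dip and rebuys after the image is debunked; a dip buyer takes the
# other side of both trades.
open_price = 4193.90                             # level before the image spread
dip_price = round(open_price * (1 - 0.003), 2)   # ~0.3% dip -> 4181.32
close_price = 4192.63                            # essentially flat by the close

panic_seller_loss = close_price - dip_price  # extra points paid to get back in
dip_buyer_gain = close_price - dip_price     # the counterparty pockets the same

print(round(panic_seller_loss, 2))  # 11.31 points per unit changed hands
```

Whether the trades net to zero across participants (as here) or not, some specific counterparty ends up down by exactly what another is up, which is the crux of the "somebody lost money" argument.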

predicted NO

@CodeandSolder the index moves by fractions of a percent all the time. The causal relationship between the fluctuation and this fake news is tenuous at best.