A deepfake is defined as any sophisticated AI-generated image, video, or audio meant to mislead. For this question to resolve positively, it must actually harm someone, not just exist. Valid forms of harm include, but are not limited to, costing someone money or making some specific, nameable person genuinely upset (not just “for all we know, people could have seen this and been upset by it”). The harm must come directly from the victim believing the deepfake, so somebody seeing the deepfake and being upset because the existence of deepfakes makes them sad does not count.
This is question #47 in the Astral Codex Ten 2023 Prediction Contest. The contest rules and full list of questions are available here. Market will resolve according to Scott Alexander’s judgment, as given through future posts on Astral Codex Ten.
Might hit newspapers tomorrow
https://twitter.com/SimonClarkeMP/status/1710979175589290434
Another occurrence for baseline tuning:
Fraud attempt with AI clone of CEO of online bank Bunq
Voice clones: Scammers imitated the voice and image of Bunq CEO Ali Niknam to extort money from his bank. The attempt failed.
Minor occurrence for baseline tuning:
https://m.slashdot.org/story/419138
Researchers Including Microsoft Spot Chinese Disinformation Campaign Using AI-Generated Photos
https://www.nytimes.com/2023/08/30/business/voice-deepfakes-bank-scams.html seems like a strong resolution candidate 🙂
On the front page of 9/1/2023: https://static01.nyt.com/images/2023/09/01/nytfrontpage/scan.pdf
> Realistically this is a bad question because of articles like this where the deepfake itself isn't of major interest but is being used to make a wider point about deepfakes. I will probably be forced to count it anyway. (from here)
> A deepfake is defined as any sophisticated AI-generated image, video, or audio meant to mislead. For this question to resolve positively, it must actually harm someone, not just exist. (from question)
Another strong candidate: https://www.nbcnews.com/tech/tech-news/deepfake-scams-arrived-fake-videos-spread-facebook-tiktok-youtube-rcna101415 which hit the front page, according to the archive :)
So apparently Serbian pro-regime TV ran deepfakes of an opposition figure (Dragan Djilas, and maybe others) on prime-time news... Coverage in Czech: https://denikn.cz/minuta/1213887/ Let's see if that makes the front pages worldwide...
Scott Alexander clarifies "major" here:
8. In 2023 will a successful deepfake attempt causing real damage make the front page of a major news source? What's a major news source? E.g., would this count? What about TV news shows or radio programmes or news sites in general that don't exactly have "front pages"?
CBC seems major. I would count anything linked from the front page of their website, ie cbc.ca. Realistically this is a bad question because of articles like this where the deepfake itself isn't of major interest but is being used to make a wider point about deepfakes. I will probably be forced to count it anyway.
If this proves to be a deepfake, then we have a YES 🤣
Once again, the definition of "major" used by Scott is gonna be important. Is India Today major? https://www.indiatoday.in/technology/news/story/kerala-man-loses-rs-40000-in-ai-based-deepfake-whatsapp-fraud-all-about-the-new-scam-2407555-2023-07-17 (on the front page on July 17 https://web.archive.org/web/20230717070009/https://www.indiatoday.in/ )
@MartinModrak India Today is a major news magazine in India, but I really don't think it counts as "front page" news to be buried deep within their revolving homepage. They print a weekly magazine; was this article referenced on the cover?
@MartinModrak For the "front page of a major news source" part - this was reported by Reuters https://www.reuters.com/technology/deepfake-scam-china-fans-worries-over-ai-driven-fraud-2023-05-22/ and reached the front page of the Technology section http://web.archive.org/web/20230523085410/https://www.reuters.com/technology/ (disable JavaScript, or it gets weirdly reloaded), as well as the main page of Gizmodo (http://web.archive.org/web/20230522171439/https://gizmodo.com/). So it seems to hinge on the definition of "major". Anyway, plenty of time to get a clear-cut case this year...
This looks borderline
@BTE I think this is far from what was intended by this question. Far more convincing fakes were made without AI long before anyone came up with the word "deepfake." There's absolutely no indication that the creator did anything beyond type "black smoke near Pentagon" into a garden-variety image generator. They didn't even bother to crop out the most obvious error (though later coverage linked in comments below did, which should tell you something), never mind anything more complicated than that. If this is a deepfake, the definition of "deepfake" has stretched so far as to become meaningless.
And that's without even getting into whether a minor blip in the stock markets counts as "causing real damage." The S&P 500 dipped 0.3% according to the AP story linked in the comments below, but overall the index rose that morning before falling in the afternoon after the story was widely reported as being a hoax. It ended up essentially flat for the day, going from 4193.9 to 4192.63. Whatever effect this picture had was dwarfed by normal volatility in the days before and after it was released.
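For what it's worth, here is a quick arithmetic check using only the figures quoted above (the two index levels and the 0.3% dip); treating those two levels as the start and end of the day is my assumption, and none of this is independently verified market data. It just confirms the net daily move was a few hundredths of a percent:

```python
# Back-of-the-envelope check of the S&P 500 figures quoted in the comment above.
# The two index levels (4193.9 -> 4192.63) and the 0.3% dip come from that
# comment / the AP story; labelling them "start" and "end" of the day is an
# assumption, not verified market data.

start_level = 4193.90   # level quoted for the start of the day
end_level = 4192.63     # level quoted for the end of the day
reported_dip_pct = 0.3  # intraday dip attributed to the fake image (AP story)

net_change_pct = (end_level - start_level) / start_level * 100
print(f"Net change for the day: {net_change_pct:.3f}%")   # about -0.030%

ratio = reported_dip_pct / abs(net_change_pct)
print(f"Reported intraday dip is roughly {ratio:.0f}x the day's net move")  # ~10x
```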
https://www.youtube.com/watch?v=7mWXhGfTyso&ab_channel=WUSA9
Others have posted this one specifically; I think this is enough to resolve YES
@ShadowyZephyr Others are debating whether jumpy stock markets count as "real harm." I think it's also questionable whether this counts as a "deepfake attempt," never mind a "successful" one. When people worry about deepfakes, the worry is about something new, beyond what Photoshop has already been able to do for decades--fakes that can't be easily detected, maybe that can't be detected at all. This picture, while AI-generated, is less convincing than even an amateur Photoshop job would be. If someone were serious about causing harm by attempting a deepfake, at the very least they would Photoshop in a more convincing view of the actual Pentagon. Even the screenshot you posted above cropped out the crowd barriers fading into fencing from the original--why didn't the original poster do that at least?
The idea of a deepfake implies regular, shallower fakes, and if this isn't a shallow fake, I don't know what is.
@BenjaminShindel I mean, it's a statistical fact that someone is losing money when a market of this size fluctuates. But I can't point to anyone to say, "That person, they lost the money." I also can't say precisely how much fluctuation was due to the image, if any. It's just very, very strongly correlated.
@BenjaminShindel at least everybody who traded based on that information lost money, even without considering derivatives
@CodeandSolder That's not true at all? If anything, they likely made money at the expense of everyone who didn't trade based on that information (although I don't think that's really a fair way of assessing the situation either).
@TylerColeman ya but also like, the market moved by a fraction of a percent and then went back up...? It's unclear that even someone that traded a LARGE amount of money would have lost money as a result of this? It was also likely the same people selling and then buying when they realized it was fake?
@BenjaminShindel Well, it depends on where in the chart they traded, but in general money changed hands, and thus somebody lost.
@CodeandSolder the index constantly moves by fractions of a percent all the time. The causality relationship between the fluctuation and these fake news is tenuous at best.
@Odoacre that causal link is accepted by respected publications:
https://www.insider.com/ai-generated-hoax-explosion-pentagon-viral-markets-dipped-2023-5