Deepfake technology is advancing rapidly, and recent posts from Argil.AI claim videos can be generated in a matter of minutes with just a few clicks:
The output is still not perfect, but if creating these videos is as easy as Argil.AI claims, we are likely not far from a wave of deepfake drama.
This market resolves for a (non-political) celebrity scandal or drama caused by a video later proven to be a deepfake. The video must be at least 30 seconds long, widely circulated by corporate media outlets, and initially accepted as genuine. As a baseline, "celebrity" means a mainstream celebrity with a Wikipedia page.
If the video is reliably proven to be AI-generated in 2024 - not merely speculated to be by the general public - this resolves Yes. If a video circulates in 2024 but proof does not arrive before the year ends, this resolves No, even if the video is later shown to be a deepfake. "Scandal" or "drama" is somewhat subjective, but there must be a negative impact on the celebrity specifically because of the content of the video.
(To be clear: the video does not need to be made with Argil.AI's technology - any AI deepfake counts.)
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ628
2 | | Ṁ120
3 | | Ṁ78
4 | | Ṁ67
5 | | Ṁ15
"In fact, Pfefferkorn thinks as AI technology proliferates, it's more likely that courts will confront accusations of fakery against real evidence than attempts to introduce fake evidence."