Stable Diffusion is an ML model for image generation (comparable to DALL-E and Midjourney) that was recently released to the public. One common concern about these image-generation models is that they can be, and have been, used to create realistic fake images (deepfakes) that are deceptive or misrepresent a real person.
Resolves YES if, by the end of 2022, an article about a deepfake created by Stable Diffusion is on the front page of https://www.cnn.com/ and I am made aware of it (e.g. via a screenshot or web-archive snapshot linked in the comments of this market; I have also set up a Google News alert and will post any such article if I notice it). Resolves NO otherwise.
Only an article primarily about a specific deepfake or set of deepfakes will count; an article about the dangers of hypothetical deepfakes or deepfakes in general would have no bearing on resolution. The article must describe the generated imagery as a "deepfake" to count for YES resolution (for example, AI-generated artwork that is not described as a "deepfake" would not count).
Top traders

| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ111 |
| 2 | | Ṁ56 |
| 3 | | Ṁ40 |
| 4 | | Ṁ27 |
| 5 | | Ṁ23 |