Resolves YES if, by EoY 2028, there is an attempt to scam someone I personally know using a technique made possible by generative AI, and I learn about it.
The scam attempt can use voice generation, video generation, or any analogous form of AI-based generation. The scammer must attempt to extract something (money/resources/info). The attempt need not be successful.
The scam must target this individual specifically, e.g. by faking the voice of a friend, as opposed to, e.g., a mass-circulated deepfake of a famous personality requesting that money be sent to a bitcoin wallet.
I'll resolve positively only if the target is someone I know in a personal capacity (friends/family/colleagues/etc.).
Resolves NO if I learn of no such scam by EoY 2028.
There may be some subjective calls in the resolution of this market, so I will not bet.
Do I understand correctly that LLM-generated phishing emails that incorporate some personal information (e.g. from a social media profile) would resolve this YES? IMO this is more likely to happen than deep-faked videos or faked voice phone calls. It's already possible now and would be an obvious tool for people who already write phishing emails (see https://arxiv.org/pdf/2305.06972.pdf).
@JanW Good question, I was thinking about this too. I think a consistent policy would be: they would count, but I would need to be very sure (98%+) that such an email is LLM-generated, and since such an email could always plausibly have been written by a human, that might end up being a difficult ask. Right now it seems somewhat unlikely that I'll be that sure about a text-based attack, but I wouldn't rule it out.
Note: I aim to resolve positively as soon as I hear about such an event.