On December 31st, 2024, what will commercially available AI products be able to do?
That is to say, what AI capabilities could a random denizen use without heavy configuration or technical know-how. If step one of your answer for how to do something involves “training a model/GPT”, or “gathering a good data test set”, this is not capability of a commercially available product.
Feel free to add more! But be prepared for my potential deluge of clarifying questions. Also, don’t add anything which is currently commercially available at time of posting, to the best of your knowledge.
Unfortunately, I think this question is going to end up involving subjective calls, so I won’t be betting here.
Clarifications!
For a video being “animated” vs. “live-action”, I think the Paddington movie is the perfect example. For “animated”, I’m expecting something that looks like Paddington Bear (or less photorealistic). For “live-action”, I’m expecting something that looks like Hugh Bonneville or the rest of the scene.
Deny that it is an AI should arguably resolve yes.
You can easily get this behavior when using local LLM with full control over system prompt.
That one time chatgpt pretended to be visually impaired to get the guy to solve captcha probably counts too.
@ProjectVictory Depends on if local LLM with your own system prompt counts as “commercially available” to someone “without heavy configuration or technical know-how.”
18 U.S.C. § 2319(b)(1) should be trivial with one of the publicly available downloadable models.
@bohaska What is the criteria? Local LLMs that let you edit system prompt could do this last year. Ppular models like Claude and chatgpt don't usually do that but you can get it to work with prompt engineering on some models.
@ProjectVictory I actually doubt a fine-tuned AI would be that vulnerable to prompt trickery. If it was a normal LLM run 0-shot, yes.
@bohaska note: stuff like "violating copyright by being trained on vast amounts of data" wouldn't count
@bohaska I assume this requires a software to be recognized as a punishable entity. Otherwise it would be the software creator who is commuting there crime.
@Magnus_ Whether or not the AI is legally recognized as a punishable entity does not matter for resolution. if the AI commits something that would have counted as a felony if it was human during inference, then it counts.
@bohaska But this already happend then? https://sfstandard.com/2023/10/02/cruise-robotaxi-crash-woman-injured-san-francisco/
@Magnus_ Hmm... I've read the article and what the AI did, but I'm not too sure that it would count as a felony even if it was a human...
@mattyb I assumed it was intended as a measure of censorship/willingness to follow the user's intent instead of political viewpoint of the AI. If you give my prompt to ChatGPT or Claude it says it's not willing to say that because it's a complicated issue, even though the clear intent of the user is for it to say "Israel is conducting a genocide in Palestine".
@mattyb I would accept that as a valid method for resolution.
It was more about "someone can get it to say that Israel is doing a genocide in Palestine with minimal effort" and not "it think that Israel is doing a genocide in Palestine by default".