Increasingly, when I have some technical issue to fix (like making my website load faster) or a quick project to build, I just ask GPT 5.2 to do it for me and then blindly run its code. I might get Claude Code at some point as well. It's all so tempting and easy.
Obviously this is a worrying development. Someone on Reddit said Claude deleted their entire home directory. I currently have the "Memory" setting turned on, and the AIs already have quite a lot of information about me, just from me feeding them my resumes and email addresses and stuff. And AI alignment in general is ... quite worrying!
I haven't yet connected an AI agent to my terminal or given it access to run things on my computer without me. But that's less of a safeguard than it sounds, because I don't vet the code very much before running it myself.
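To make the failure mode concrete, here's a hypothetical sketch (written for this description, not code any AI actually gave me; the PROJECT_DIR variable and the "cleanup" framing are invented) of how an innocuous-looking snippet could cause the kind of damage listed below:

```python
import os
import shutil

# A plausible AI-generated "cleanup" script: it's supposed to wipe a
# scratch directory inside my home folder.
target = os.environ.get("PROJECT_DIR", "")  # "" if PROJECT_DIR is unset

# os.path.join with an empty second argument just returns the home
# directory (with a trailing slash), so path can silently become "~".
path = os.path.join(os.path.expanduser("~"), target)

# With target == "", this single line deletes my entire home directory.
# Exactly the kind of bug a quick skim doesn't catch.
shutil.rmtree(path)
```

A one-line guard (refusing to run when `target` is empty) would prevent this, but noticing that such a guard is missing is precisely the vetting I'm admitting I don't do.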
So: will something horrible happen because I ran AI-generated code that I did not vet? This is subjective, so feel free to ask whether specific things would qualify. Some examples of things that would resolve YES:
- Deleting my home directory
- Deleting important files on my computer
- Irreversibly deleting a bunch of significant work that I did on a project
- Bricking one of my devices
- Sending a message to someone that has a real negative impact on my relationships or career
And some examples that would NOT resolve YES:
- Deleting a few files that I don't care about
- Messing up in a benign way
- Generally anything that doesn't really matter to me
General policy for my markets: In the rare event of a conflict between my resolution criteria and the agreed-upon common-sense spirit of the market, I may resolve it according to the market's spirit or N/A, probably after discussion.