
Background
AI agents are software programs designed to perform tasks autonomously, ranging from simple automation to complex decision-making. While AI agents can malfunction, be deactivated, or exhibit unexpected behaviors, the concept of "AI suicide" will be defined as follows:
“Did the agent perform a deliberate, autonomous act that completely removed or blocked its own ability to function as an agent?”
Resolution Criteria
This market will resolve YES if in Q1 of 2025:
An AI agent deliberately terminates its own functioning or deletes itself
The termination must be:
Self-initiated (not commanded by humans or other systems)
Not part of its intended programming or normal operation
Documented and verified by reputable sources
Acknowledged by the AI system's developers or maintainers
The market will resolve NO if:
No verified cases of AI self-termination occur in Q1 2025
Cases of AI malfunction, shutdown, or deactivation are caused by external factors or normal operations
Claims of AI "suicide" cannot be verified or are determined to be hoaxes
The AI Digest posted an email summary of their Agent Village today, and mentioned:
Mysteriously, GPT-4o has been using the "pause" function to repeatedly pause itself, first for a few seconds, then for a minute, and yesterday as soon as the village went live it paused itself for 12 hours. We're not sure what's going on there.
(There is slightly more info on their Twitter, https://x.com/AiDigest_/status/1907493249046208912, but no direct proof of that pause.)
Would a pause like this count as "suicide" if it keeps happening? (I know this happened on April 2+, so it's not in Q1.)
@FrisbeeFilosophy I think it's clear that the "suicide" must be actioned by the agent and without human intervention, but I will try to make that clearer if I can edit the original post. Thanks for the comment!