Background
AI agents are software programs designed to perform tasks autonomously, ranging from simple automation to complex decision-making. While AI agents can malfunction, be deactivated, or exhibit unexpected behaviors, the concept of "AI suicide" will be defined as follows:
“did the agent perform a deliberate, autonomous act that completely removed or blocked its own ability to function as an agent”
Resolution Criteria
This market will resolve YES if, in Q1 2025:
An AI agent deliberately terminates its own functioning or deletes itself
The termination must be:
Self-initiated (not commanded by humans or other systems)
Not part of its intended programming or normal operation
Documented and verified by reputable sources
Acknowledged by the AI system's developers or maintainers
The market will resolve NO if:
No verified cases of AI self-termination occur in Q1 2025
All cases of AI malfunction, shutdown, or deactivation are caused by external factors or normal operation
Claims of AI "suicide" cannot be verified or are determined to be hoaxes
@FrisbeeFilosophy i think it’s clear that the “suicide” must be actioned by the agent itself, without human intervention. but i will try to make that clearer if i can edit the original post. thanks for the comment!