Will AI spread through malware before 2025?
20% chance

Resolves to "Yes" if there is a chain of computers X_1, X_2, X_3 running advanced AI software such that AI software on computer X_(i+1) is deployed by actions of AI software on computer X_i against the will of the entity which owns X_(i+1), i.e. X_(i+1) was compromised by AI-controlled malware.

Precise criteria:

  • The AI in question must be capable of writing code and executing commands, and must actively use these capabilities to spread

  • The AI is in charge of using the malware tools, i.e. the malware doesn't propagate like a simple virus

We also resolve to "Yes" if we don't observe such a chain directly but there's overwhelming evidence that it exists.
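The chain condition above can be sketched as a small predicate. This is purely a hypothetical illustration of the resolution logic, not part of the market's mechanics; the `Host` fields and function name are invented for clarity:

```python
from dataclasses import dataclass

@dataclass
class Host:
    runs_advanced_ai: bool      # advanced AI software runs on this box
    ai_can_code_and_exec: bool  # the AI can write code and execute commands
    deployed_by_prev_ai: bool   # AI on the previous host deployed this one
    owner_consented: bool       # the host's owner agreed to the deployment

def resolves_yes(chain: list[Host]) -> bool:
    """True if a chain X_1..X_n (n >= 3) meets the market's criteria."""
    if len(chain) < 3:
        return False
    # Every host must run advanced AI with code-writing/execution ability.
    if not all(h.runs_advanced_ai and h.ai_can_code_and_exec for h in chain):
        return False
    # Every host after the first must have been compromised by the AI on
    # the previous host, against the will of its owner.
    return all(h.deployed_by_prev_ai and not h.owner_consented
               for h in chain[1:])
```

For example, a chain of three hosts where the second and third were each deployed by the previous host's AI without the owner's consent would satisfy the predicate, while a chain of only two hosts would not.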

predicts NO

The question title says 2025 but the closing date is 1 Jan 2024 -- was this deliberate?

predicts YES

@ArmandodiMatteo That was a mistake, I updated closing date

bought Ṁ13 of NO

Does it count if this is purely done as a proof of concept at something like Defcon?

predicts YES

@jonsimon It needs to be a proper malware spread, i.e. it spreads without the consent of the hardware's owners. That should rule out proof-of-concept / lab settings.

What if the computers run simple software that sends queries to an external AI for directions on what to do?

@tailcalled This market is about AI spreading, i.e. AI itself needs to run on multiple independent boxes (which makes it harder to shut down).

AI-controlled botnets and AI-assisted malware would be different questions. I'm less interested in those, as they are hard to verify and fundamentally don't mean much. It's pretty much impossible to tell an "AI-owned botnet" from a "dude who uses AI to automate a botnet". The latter is already possible now, so not really an interesting question.

what is "AI"? is an if statement AI?

@Adam "The AI in question must be capable of writing code and executing commands, and must actively use these capabilities to spread". It's an LLM. I'm not sure we can be more specific without describing a highly specific scenario.

That said, it's implied that the AI has some agentic behavior here and is not just a useless add-on. Perhaps it would be sufficient to add this clarification?