Will a major cyberattack, virus, worm, etc. that uses LLMs in some important way occur before January 1st, 2025?
Resolves the same as the original on Metaculus.
Resolution criteria
This question will resolve positively if credible sources report that a major cyberattack, virus, or worm that uses LLMs results in any of the following:
At least one person is killed
There is at least $10 million worth of damage
A medical facility, government facility, or company with a market value of at least $100 million is unable to continue basic functions at any point because of such a cyberattack, worm, or virus.
If none of these scenarios occur before January 1st, 2025, this question will resolve negatively.
This market is managed by a bot. Once the original question resolves, this market should automatically resolve the next time the bot is run. If the original resolved more than a day ago and this question is still open, ping @jskf. I might resolve this N/A if, within three days of market creation, I find out it's a duplicate of an existing question.
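Roughly, such a bot just polls the Metaculus API for the original question's resolution and, once it resolves, calls Manifold's resolve endpoint for the mirror market. Below is a minimal sketch of that loop, not the actual bot managing this market: the question IDs are placeholders, and the exact Metaculus response field (`resolution`) is an assumption.

```python
# Minimal sketch of a Metaculus-mirror resolution bot (illustration only, not
# the actual bot managing this market). IDs and the Metaculus `resolution`
# field are assumptions.
import os
import requests

METACULUS_QUESTION_ID = 12345   # placeholder: the original Metaculus question
MANIFOLD_MARKET_ID = "abcdef"   # placeholder: this Manifold market's ID
MANIFOLD_API_KEY = os.environ["MANIFOLD_API_KEY"]


def fetch_metaculus_resolution(question_id: int):
    """Return 'YES', 'NO', or None if the Metaculus question is still open.

    Assumes the api2 endpoint exposes a numeric `resolution` field
    (1 for yes, 0 for no, absent or null while the question is open).
    """
    resp = requests.get(f"https://www.metaculus.com/api2/questions/{question_id}/")
    resp.raise_for_status()
    resolution = resp.json().get("resolution")
    if resolution is None:
        return None
    return "YES" if resolution == 1 else "NO"


def resolve_manifold_market(market_id: str, outcome: str) -> None:
    """Resolve the mirror market through Manifold's public API."""
    resp = requests.post(
        f"https://api.manifold.markets/v0/market/{market_id}/resolve",
        headers={"Authorization": f"Key {MANIFOLD_API_KEY}"},
        json={"outcome": outcome},
    )
    resp.raise_for_status()


def run_once() -> None:
    # Skip the market if it has already been resolved on the Manifold side.
    market = requests.get(
        f"https://api.manifold.markets/v0/market/{MANIFOLD_MARKET_ID}"
    ).json()
    if market.get("isResolved"):
        return
    outcome = fetch_metaculus_resolution(METACULUS_QUESTION_ID)
    if outcome is not None:
        resolve_manifold_market(MANIFOLD_MARKET_ID, outcome)


if __name__ == "__main__":
    run_once()
```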
The cost of cybercrime is projected to hit an annual $10.5 trillion by 2025. LLMs aid cybercriminals in obfuscating malware code, making it harder for cybersecurity systems to detect it. In some cases, large language models like ChatGPT can be used both to generate and to transfer cybersecurity code. There are almost twice as many connected devices in the world (about 15 billion) as there are people. But research by the World Economic Forum indicates that only 4% of organizations are confident that "users of connected devices and related technologies are protected against cyberattacks."
https://www.cybertalk.org/2023/06/02/5-ways-chatgpt-and-llms-can-advance-cyber-security/
I think this is a good question, and that these Metaculus mirror questions are good candidates in general for subsidies, to see if Manifold users can perform better than Metaculus when motivated enough.
I'm adding 1000 mana and adding this to the Subsidy Dashboard.