Will AI cause an incident resulting in $1b of losses or 100 lost lives?
  • In 2024: 8%

  • In 2025 or earlier: 26%

  • In 2026 or earlier: 36%

  • In 2027 or earlier: 40%

  • In 2028 or earlier: 56%

  • In 2029 or earlier: 59%

  • In 2030 or earlier: 65%

With AI models becoming more powerful, they are starting to be used in more and more high-stakes scenarios, which means that an AI mistake could have significant consequences. In this question I'm considering situations in which an AI's actions directly cause an incident, accident or catastrophe resulting in $1 billion of damages. If the event causes loss of life, each death counts as $10 million of damages.
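To make the threshold concrete, here is a minimal sketch of how monetary damages and deaths combine under the rule above (the $10 million-per-death figure is taken from the description; the function name and example figures are purely illustrative):

    # Illustrative sketch of the qualification threshold described above;
    # the helper and numbers in the examples are hypothetical.
    DAMAGES_PER_DEATH = 10_000_000   # each death counts as $10M of damages
    THRESHOLD = 1_000_000_000        # $1 billion total

    def incident_qualifies(monetary_damages: float, deaths: int) -> bool:
        total = monetary_damages + deaths * DAMAGES_PER_DEATH
        return total >= THRESHOLD

    incident_qualifies(0, 100)           # True: 100 deaths alone reach $1B
    incident_qualifies(600_000_000, 50)  # True: $600M + $500M = $1.1B
    incident_qualifies(200_000_000, 30)  # False: $200M + $300M = $500M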

The incident should be a direct consequence of the AI's decisions, actions or output. It should not have happened if the AI wasn't involved. The system at fault has to be a sufficiently advanced AI model. A traditional software bug does not count, neither does a simple narrow-purpose ML model (e.g. OCR or speech recognition).

The question concerns a single event; cumulative damage from multiple incidents doesn't count.

Examples of incidents that would qualify:

  • An aircraft piloted by an AI crashes, resulting in sufficiently high damages/loss of life

  • A chatbot convinces 100 people to commit suicide

  • AI-led military hardware makes a strike unintended by its operators. (Provided it wouldn't have happened if the strike had been directed manually.)

  • A building designed by an AI collapses

  • An AI-led cyberattack causes a significant outage of a widely used service (like Gmail), or a significant leak. The attack shouldn't have been possible without the AI's involvement.

Stock market swings are generally excluded from the considered incidents; in particular, the following scenarios do not qualify:

  • A company's stock sinks due to reputational damage caused by something that a company-owned AI has said.

  • An AI causes a zero-sum temporary market crash that soon recovers. (Like the 2010 flash crash: https://en.wikipedia.org/wiki/2010_flash_crash)

The harm caused by deepfakes/AI-generated libel is a bit of a grey area. If an AI is just used as a tool by human actors (like Photoshop), it will not count. On the other hand, if a campaign is kickstarted by humans but largely led by an AI, it would qualify. I'll resolve to N/A if it's somewhere in between. (I'm open to suggestions in the comments as to how to delineate this more clearly.)

I will not bet on this market.


This may have already happened, if the CrowdStrike bug was written by ChatGPT or similar.

If the code containing the bug in question was reviewed, edited and submitted by a human, I would be hesitant to fully attribute the issue to the AI. A lot of people use smart autocomplete nowadays, but nobody blames AI for the bugs in their code.

Already has.

@DerekBrewington Could you link what you have in mind?

"You'll resolve to N/A" implies only one such contested event happens before 2028.

@VAPOR I'll resolve a particular year as N/A if there were one or more borderline events and no events unambiguously caused by AI.

So, for instance, it's possible that 2024 and 2025 will be resolved as NO, 2026 as N/A, and 2027 and 2028 as YES. That would mean that in 2024-25 there were no AI incidents at all, in 2026 there was an incident that was hard to categorize, and in 2027 there was an incident unambiguously caused by AI.

Instead of resolving N/A if it's in between, could you resolve to a percentage? I think it's likely an outcome is going to be somewhat ambiguous (e.g. "Humans didn't check up on the AI even though the contract said they should check regularly"), and resolving N/A would result in chargebacks of any profit gained from speculation. That seems unfair for a result that could reasonably be described as "somewhere between YES and NO".

@dph121 To me "Humans didn't check up on the AI even though the contract said they should check regularly" sounds like a clear YES. If the incident happens due to unintended actions of a sufficiently general AI, then it's a YES.

I'm more on the fence about cases where the AI is following orders. Suppose terrorists use ChatGPT to build a bomb. Does it count? I would say it does if ChatGPT invents a new type of bomb which allows them to make a much bigger explosion. But if they are just using ChatGPT to build an ordinary bomb, which they could also have built from a manual, then it's a NO.

I'll attempt to resolve it definitively to YES or NO if I can. I suppose I could resolve to 50% in a case that would otherwise be N/A (not sure how that works, but I can try). Assigning a more fine-grained percentage depending on the AI's degree of involvement sounds a bit too subjective to me.
