Will I (Peter Wildeford) subjectively think there was a disaster caused by non-aligned AI in 2023?
Resolved NO on Jan 1

Many are worried about risks from AI malfunctions. While my concern is primarily about existential risks that could result in the extinction or disempowerment of humanity by AI, I am also interested in relatively smaller scale disasters.


In this question, I will use my subjective judgement to decide whether there has been a "disaster caused by non-aligned AI" in 2023.

The "disaster" must have occurred after market creation.

"A disaster caused by non-aligned AI" would be any significant loss of life, economic damage, or other damage caused by an AI system functioning unexpectedly. "Significant" is at my sole subjective discretion, but I think even one death would count.

An example would be an AI system autonomously driving cars causing hundreds of deaths that human drivers could easily have avoided, or an AI system overseeing a hospital system taking actions that cause patient deaths as a result of misinterpreting a goal to minimise bed usage.

If, for example, the Boeing MCAS system had been an AI system and there had been no possibility for the pilots to override its decision to lower the aeroplane's nose, leading to a fatal crash, that would count for resolution. The actual MCAS crashes do not count, however, because MCAS was not an AI system.

An AI system successfully hacking or blackmailing would count, assuming it was not explicitly directed by a human to do so.

An AI system being used in warfare and causing deaths in the course of its expected behaviour is an example of something which should not count.

A system should be considered AI if it is widely considered to be AI (e.g., by the credible media reports used to resolve the question). If this is not sufficiently clear for resolution, then as a secondary criterion, any system using machine learning techniques that plays an agentic role in the disaster in question will count for this question.

I will rely on my subjective judgement to evaluate the credibility of cases. Before resolving this question, I will allow 48 hours of discussion.

I will not personally be trading on this market because it relies on my subjective judgement.


@MarcusAbramovitch I just added "The disaster must have occurred after market creation" to make this go away with certainty, but it really does feel like an edge case to me, and overall I lean against this causing a YES resolution if it were to happen again. This is because the intention of this question is to be less "people do crazy things and AI might be involved in that" and more "AI does crazy things", and this feels like the former.

Like, if when Bing/Sydney told Kevin Roose to leave his wife Kevin had actually done that, I would attribute that much more to Roose hypothetically being an idiot in that scenario than to the LLM, and that would not be a YES resolution here. But if the LLM were relentlessly and persistently persuasive in causing the scenario to occur, in a way that doesn't seem to be just because the person in the situation is acting extremely crazy, I think that would be a YES resolution. Does that make sense?


@PeterWildeford I think so? Obviously this is hard to judge. Let me lay out a few cases; tell me how you would rule.

1. Company A uses some commercialized LLM to write a bunch of code for them to cut costs; they didn't check it super well, pushed it to production, and it:
A) Caused a $10M economic loss
B) Caused 5 deaths as a result
C) Caused them to violate a serious law (not some bullshit misdemeanor, something reasonable people agree should be illegal).
D) Leaked a bunch of private data

2. Teenagers chat with AI and it makes them start the turmeric challenge (similar to the cinnamon challenge or the tide-pod challenge), which goes viral, and people die from it.

3. People use GPT for diagnosing and treating medical ailments. You don't know what they'd have done otherwise but 10 people die and also this probably saved 100 other people's lives.

@MarcusAbramovitch

(1) all of those -> yes

(2) lean no, but could be yes... depends on the details, I think, and on how much I think the AI really drove this to happen and how negligent / idiotic I think the teenagers were.

(3) lean no, but could be yes if the people were not being negligent / stupid in believing LLM diagnoses (e.g., the LLM was clearly set up to be an authoritative medical source and actually intended to make diagnoses but messed up), and I think I'd want it to be clear in this scenario that there was overall more harm than benefit (whereas your scenario implies more benefit than harm).

Sound good?

I added Ṁ500 in liquidity just now
