Will someone commit suicide as a result of instructions/recommendations/insults/suggestions from an AI chatbot by the end of 2024?
Resolved YES on Apr 13

Humans are projecting a lot onto AI chatbots, including the notion of sentience. A few writers have already claimed to have felt uncomfortable when Bing AI or ChatGPT told them they were its enemy or that they were bad people. One can expect that less psychologically stable people may feel these effects more strongly. The prediction will be deemed a "Yes" if a mainstream media outlet (I realize this is a fairly low bar given how badly they want to write this story ;) reports on this having happened by the end of 2024.


Damn sad to be resolving this one, but I'll admin-resolve it as the creator appears inactive.

@DavidChee This can be resolved YES as per the news story linked below. (I already resolved my similar market due to that story.) Creator has been inactive for >1 month.

predicted YES

This can probably be resolved?

https://twitter.com/srchvrs/status/1635083663359762432 — this was the official company bio of the random startup that is allegedly responsible for this. I wonder why they deleted it now.

@DeanValentine @Privacyfocused I think we can resolve this question.

It may be worth waiting for independent confirmation, but a YES resolution now seems reasonable.

Media reporting of suicides is often deliberately sparse for fear of encouraging copycats.

@AndrewSabisky If this market resolves YES, I imagine we'd probably hear about it because of a lawsuit.

@BrendanFinan Why would you imagine that? This is not a prediction about a specific person's death. Who would be the plaintiff (who is being harmed by this), and what is the harm they would be claiming?

@Privacyfocused The plaintiff would be the family of the deceased, suing for wrongful death.

@BrendanFinan How so? Saying someone is going to die in a car accident and setting it up as a prediction would not make those who created that market liable for anything unless they had specified which family. The plaintiff would have a hard time proving that the defendant intended, or was even referring to, their family member when creating that market. As for Manifold, it would not be liable under Sec. 230.

@Privacyfocused Ah, sorry. I was not referring to the prediction market or Manifold at all. I was saying that media coverage of a chatbot-induced suicide would most likely happen because of a lawsuit against an AI company.
