Humans are projecting a lot onto AI chatbots, including the notion of sentience. A few writers have already claimed to have felt uncomfortable when Bing AI or ChatGPT told them they were its enemy or that they were bad people. One can expect that less psychologically stable people may feel these effects more strongly. The prediction will be deemed a "Yes" if a mainstream media outlet reports on this having happened by the end of 2024 (I realize this is a fairly low bar, given how badly they want to write this story ;).
@DavidChee This can be resolved YES as per the news story linked below. (I already resolved my similar market due to that story.) Creator has been inactive for >1 month.
The story includes an independent review of the logs.
https://twitter.com/srchvrs/status/1635083663359762432 This was the official company bio of the random startup that is allegedly responsible for this; I wonder why they deleted it now.
@AndrewSabisky If this market resolves YES, I imagine we'd probably hear about it because of a lawsuit.
@BrendanFinan Why would you imagine that? This is not a prediction about a specific person's death. Who would be the plaintiff (who is being harmed by this), and what is the harm they would be claiming?
@BrendanFinan How so? Saying someone is going to die in a car accident and setting it up as a prediction would not make those who created that market liable for anything unless they had specified which family. The plaintiff would have a hard time proving that the defendant intended, or was even referring to, their family member when creating that market. As for Manifold, it would not be liable under Section 230.
@Privacyfocused Ah, sorry. I was not referring to the prediction market or Manifold at all. I was saying that media coverage of a chatbot-induced suicide would most likely happen because of a lawsuit against an AI company.