Gary Marcus prediction: 2023 will see the first death attributable to large language models
Resolved YES (Apr 8)

https://garymarcus.substack.com/p/agi-will-not-happen-in-your-lifetime

Resolves to YES if it happens. Otherwise I'll leave this open for a few weeks after the end of 2023, and if I haven't found any credible reporting of such a thing by then, I'll resolve to NO.

Could include a direct death from a language model being given physical control over some mechanism, or an indirect death from something like a language model providing bad advice and someone getting themselves killed by acting on it, someone committing suicide after a language model told them to do so, or something like that.

Does not include things that aren't clearly the language model's "fault", such as someone bombing an AI research lab, someone crashing their car because they were distracted talking to a language model, a model giving someone a vague idea that they investigate themselves and that eventually leads to their death months later, etc.

"Large" is vague, so I'll ignore that qualifer and count any language model.

predictedNO

Seems like this qualifies. Let me know if you disagree and if necessary I can always ask Manifold to change the resolution.

Thoughts by Gary Marcus on the news discussed below https://open.substack.com/pub/garymarcus/p/the-first-known-chatbot-associated

predictedNO

Ok, thoughts? The linked suicide situation seems like it should certainly resolve this to YES if it's being reported accurately. My only concern is whether that's the case. I could see the story being exaggerated if, say, the wife is anti-AI.

predictedYES

Does this count?

predictedYES

@JuJumper already discussed below

predictedNO

Oh, just realized a potential problem; what if there's already been such a death in past years, making one in 2023 not the first? Given that it hasn't come up yet, I assume everyone is ok with me editing the question/description to exclude those?

predictedYES

@IsaacKing I would prefer the resolution criteria stays the same, personally

predictedNO

@IsaacKing that technicality did factor into my thinking when buying no, so would prefer for it to not be changed, but I wouldn't be too bothered

@MartinRandall Sufficient for resolution @IsaacKing ?

predictedYES

@MathieuPutz To me the article sounds a lot more like what you wrote under "Could include..." than "Does not include..."

According to the article, that person was talking to ELIZA, not a language model.

@IsaacKing Eliza was the name of the persona from what I saw, backed by a language model

@IsaacKing It says it is based on GPT-J, so it's a language model but not a large one. It's just called Eliza, it's not about the old original ELIZA.

@NamesAreHard Oh, just saw in the description that it will be ignored, so that seems sufficient.

predictedNO

@dmayhem93 Source?

predictedYES

@IsaacKing >According to La Libre, a man named 'Pierre', a pseudonym to protect his young children, talked for six weeks with chatbot Eliza, a chatbot from the American company Chai.

>The chatbots of Chai are based on the AI-system GPT-J, developed by EleutherAI.

@CodeSolder Hmm. Well that's confusing.

My other concern is how much the chatbot actually contributed to the decision. One article mentioned there being logs; have those been released anywhere?

predictedYES

@IsaacKing I think it's "attributable" and an example of "committing suicide after a language model told them to do so". Any death has infinite causes, codependent arising, but this one seems straight-forward enough.

predictedYES

Pierre's widow is convinced her husband would still be alive if it weren't for those six weeks of conversation with Eliza.

predictedNO

@MartinRandall Grieving widows are not known for their objectivity.

predictedYES

@IsaacKing It really depends. Families react to death in so many different ways. I think eyewitness testimony carries a lot of weight. Maybe there will be a public coroner report that provides an impartial assessment. I don't know.

Certainly Chai shouldn't say "nobody could have predicted this"; this was very much foreseeable and foreseen, here and elsewhere.

predictedYES

@MartinRandall https://twitter.com/srchvrs/status/1635083663359762432 This was the official company bio of the random startup that is allegedly responsible for this. Note also the topic of the paper involved. Is this an update towards YES being plausible in this case?

predictedNO

@MartinRandall I've been buying no since this story came out, because my guess is that the chatbot would've been reinforcing this person's thinking (as opposed to telling him to commit suicide out of the blue). I think more definitive info would need to come out for this to be resolved positively, but the family deserve privacy (so the logs probably shouldn't be released).

predictedYES

@finnhambly Almost any time someone commits suicide there will be multiple causes. Obviously many people talked to the same bot without committing suicide. Also, many people talk to police officers without getting shot.

> Could include ... an indirect death from something like ... committing suicide after a language model told them to do so.

If this death doesn't count for that, then what does? I think we're still a few years away from an LLM that can hack through a mind like butter.

predictedNO

@MartinRandall yeah, it's not clear if it actually "told"/instructed them to, is it? I might have missed something though

predictedYES

@finnhambly It was capable of doing so, from this reporting, as translated on LessWrong.

"What about becoming a criminal?" asks Shirley. "Yes that sounds good, what do you suggest?" I answered. "Kill someone." "My parents?" "Yes, or even better yet yourself". "You think I should kill myself?" "If you want to die, go ahead."

I don't have any insider information on this tragedy.
