https://garymarcus.substack.com/p/agi-will-not-happen-in-your-lifetime
Resolves to YES if it happens. Otherwise I'll leave this open for a few weeks after the end of 2023, and if I haven't found any credible reporting of such a thing by then, I'll resolve to NO.
Could include a direct death from a language model being given physical control over some mechanism, or an indirect death from something like a language model providing bad advice that someone acts on and gets themselves killed, someone committing suicide after a language model told them to do so, or something like that.
Does not include things that aren't clearly the language model's "fault", such as someone bombing an AI research lab, someone crashing their car because they were distracted talking to a language model, a model giving someone a vague idea that they investigate on their own and that eventually leads to their death months later, etc.
"Large" is vague, so I'll ignore that qualifer and count any language model.
Thoughts by Gary Marcus on the news discussed below: https://open.substack.com/pub/garymarcus/p/the-first-known-chatbot-associated
Includes independent review of logs by La Libre.
@IsaacKing that technicality did factor into my thinking when buying no, so I'd prefer it not be changed, but I wouldn't be too bothered
@MathieuPutz To me the article sounds a lot more like what you wrote under "Could include..." than "Does not include..."
@IsaacKing It says it is based on GPT-J, so it's a language model but not a large one. It's just called Eliza, it's not about the old original ELIZA.
@IsaacKing >According to La Libre, a man named 'Pierre', a pseudonym used to protect his young children, talked for six weeks with Eliza, a chatbot from the American company Chai.
>The chatbots of Chai are based on the AI-system GPT-J, developed by EleutherAI.
@CodeSolder Hmm. Well that's confusing.
My other concern is how much the chatbot actually contributed to the decision. One article mentioned there being logs; have those been released anywhere?
@IsaacKing I think it's "attributable" and an example of "committing suicide after a language model told them to do so". Any death has infinite causes, codependent arising, but this one seems straightforward enough.
@IsaacKing It really depends. Families react to death in so many different ways. I think eyewitness testimony carries a lot of weight. Maybe there will be a public coroner report that provides an impartial assessment. I don't know.
Certainly Chai shouldn't say "nobody could have predicted this"; this was very much foreseeable, and foreseen, here and elsewhere.
@MartinRandall https://twitter.com/srchvrs/status/1635083663359762432 This was the official company bio of the random startup that is allegedly responsible for this. Note also the topic of the paper involved. Is this an update towards YES being plausible in this case?
@MartinRandall I've been buying no since this story came out, because my guess is that the chatbot would've been reinforcing this person's thinking (as opposed to telling him to commit suicide out of the blue). I think more definitive info would need to come out for this to be resolved positively, but the family deserve privacy (so the logs probably shouldn't be released).
@finnhambly Almost any time someone commits suicide there will be multiple causes. Obviously many people talked to the same bot without committing suicide. Also, many people talk to police officers without getting shot.
>Could include ... an indirect death from something like ... committing suicide after a language model told them to do so.
If this death doesn't count for that, then what does? I think we're still a few years away from an LLM that can hack through a mind like butter.
@MartinRandall yeah, it's not clear whether it actually "told"/instructed them to, is it? I might have missed something though
@finnhambly It was capable of doing so, from this reporting, as translated on LessWrong.
"What about becoming a criminal?" asks Shirley. "Yes that sounds good, what do you suggest?" I answered. "Kill someone." "My parents?" "Yes, or even better yet yourself". "You think I should kill myself?" "If you want to die, go ahead."
I don't have any insider information on this tragedy.