Gary Marcus prediction: 2023 will see the first death attributable to large language models
38% chance

https://garymarcus.substack.com/p/agi-will-not-happen-in-your-lifetime

Resolves to YES if it happens. Otherwise I'll leave this open for a few weeks after the end of 2023, and if I haven't found any credible reporting of such a thing by then, I'll resolve to NO.

Could include a direct death from a language model being given physical control over some mechanism, or an indirect death: for example, someone acting on a language model's bad advice and getting themselves killed, or committing suicide after a language model told them to do so.

Does not include deaths that aren't clearly the language model's "fault", such as someone bombing an AI research lab, someone crashing their car because they were distracted talking to a language model, or a model giving someone a vague idea that they then investigate themselves and that eventually leads to their death months later.

"Large" is vague, so I'll ignore that qualifer and count any language model.

MaximilianG

Is it enough for someone to claim they killed someone because of a language model, or do you require some other sort of evidence?

IsaacKing

@MaximilianG Personal assertions are perfectly good evidence. Of course, if there's some reason for me to believe that the person making the claim is lying or mistaken, then I'll take that into account as well. Reporting from multiple sources would be ideal.

DesTiny bought Ṁ9 of NO