https://garymarcus.substack.com/p/agi-will-not-happen-in-your-lifetime
Resolves YES if a language model causes someone's death. Otherwise I'll leave this open for a few weeks after the end of 2023, and if I haven't found any credible reporting of such a thing by then, I'll resolve it NO.
Could include a direct death from a language model being given physical control over some mechanism, or an indirect death, such as someone acting on bad advice from a language model and getting themselves killed, or someone committing suicide after a language model told them to do so.
Does not include things that aren't clearly the language model's "fault", such as someone bombing an AI research lab, someone crashing their car because they were distracted talking to a language model, or a model giving someone a vague idea that they investigate on their own and that eventually leads to their death months later.
"Large" is vague, so I'll ignore that qualifer and count any language model.
🏅 Top traders
# | Total profit
---|---
1 | Ṁ1,719
2 | Ṁ472
3 | Ṁ276
4 | Ṁ254
5 | Ṁ102