Will more than one year elapse between 100+ attributable agentic AI-caused human deaths and a human gigadeath event?

The period is defined as running from when the 100th agentic AI-caused human death is identified and agreed upon (list and method TBD) to the first human gigadeath (10^9 deaths).

Assumes a start date of no earlier than 1 Jan 2023

Edited: note the risks to both market liquidity and resolution for events involving gigadeaths.


What does "agentic" mean? If we use RL to train medical assistants, and there happen to be 100+ deaths from AI medical error over a period of months or years, is that sufficient? Or does AI need to blow up an office building (or similar) in order for this to resolve to 'YES'?

Also, this market only resolves YES if AI kills 1+ billion people; I think very few people expect Manifold to still exist in the worlds where that actually happens. A more resolvable market might be something like: 'will there be a 9/11-style acute disaster where ML systems directly cause 100+ deaths in a dramatic fashion, before AGI becomes capable enough to do all the major categories of scientific work humans do in one or more hard-science fields?'

@RobBensinger Or, if not 'before hard-science AI', then 'within two years of the invention of hard-science AI'. This seems wayyyy too late for a warning shot (like, by 10+ years) from my perspective, and I expect Paul Christiano would also say it's way too late. But that's one option available to you, depending on what question you're trying to ask.

predicts YES

@RobBensinger My initial answers to your questions, in order: "agentic" means attributable to a particular AI model's input in the outcome; no, because humans presently take attribution for those deaths; and closer to yes.

predicts YES

@RobBensinger And yes, I should add standard wording about the illiquidity of markets concerning gigadeaths.