It is what it sounds like. I don't need to be picky about adjudication criteria. We'll know it if we see it. (God forbid.)
This is a fix for the x-risk shortcoming in these markets. If a claim only resolves YES when all the humans are dead, there may be no one left to settle it, and no one alive to collect on it. Before time discounting, such markets should trade at 0% (or at 100%, in the inverted "will humanity survive?" framing) regardless of anyone's actual credence; the toy calculation after the links below makes this concrete.
https://manifold.markets/MartinRandall/will-ai-wipe-out-humanity-before-th-d8733b2114a8
https://manifold.markets/EliezerYudkowsky/will-ai-wipe-out-humanity-by-2030-r
https://manifold.markets/jack/will-humanity-go-extinct-before-203
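A minimal sketch of the pricing argument, under my own assumptions rather than anything stated in the thread: a binary share pays 1 mana at resolution, mana is worthless to a dead holder, and an extinction market can only resolve YES in worlds with no survivors. The names (`P_DOOM`, etc.) are illustrative.

```python
# Toy expected-value calculation for the settlement problem described above.
P_DOOM = 0.30  # the trader's honest credence; any value gives the same conclusion

# "Will AI wipe out humanity?" framing:
# YES pays 1 only in extinct worlds, where the holder cannot collect.
ev_yes_collectible = P_DOOM * 0        # payout exists, but the holder is dead
ev_no_collectible = (1 - P_DOOM) * 1   # NO pays, and the holder is alive

# Value shares by expected payout conditional on surviving to spend it,
# since that is the only case where the payout is worth anything.
p_survive = 1 - P_DOOM
print(ev_yes_collectible / p_survive)  # 0.0 -> rational traders push the price to 0%
print(ev_no_collectible / p_survive)   # 1.0 -> NO is a sure win for survivors

# The inverted "will humanity survive?" framing is symmetric: its YES share
# is the sure win for survivors, so that market gets pushed to 100% instead.
```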
Most cases where this happens are cases where AI kills everyone. What this market effectively measures is the narrower and less likely-seeming case of AI killing half a billion people without disrupting Manifold settlement.
This seems most likely to happen if somebody uses non-superintelligent AI to engineer a pandemic. Would that even count?
@EliezerYudkowsky Can you elaborate a bit more on your reasoning for why the number of deaths is bimodally distributed across the range of outcomes? Which parameter(s) of the model have this thresholding behavior where AI suddenly goes from “kills no-one” to “kills everyone”?
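For context on the premise being questioned, here is a toy model of my own construction (not Yudkowsky's stated reasoning) showing how a single capability threshold could make the death count bimodal even when the underlying inputs vary smoothly:

```python
# Toy threshold model -- illustrative only, not Yudkowsky's actual model.
import random

def toy_deaths(capability: float, threshold: float = 0.7) -> float:
    """Fraction of humanity killed, in a model with one capability threshold."""
    # Below the threshold, a misaligned AI cannot defeat humanity (deaths ~ 0);
    # above it, it can, and then kills (approximately) everyone (deaths ~ 1).
    return 0.0 if capability < threshold else 1.0

# Capability varies smoothly, yet the outcome distribution is bimodal:
samples = [toy_deaths(random.random()) for _ in range(10_000)]
print(sum(s == 0.0 for s in samples), "worlds with ~no deaths")
print(sum(s == 1.0 for s in samples), "worlds with ~everyone dead")
# Intermediate outcomes like "half a billion dead" never occur here; they would
# require capability to land in a knife-edge band this model has zeroed out.
```

The question above is, in effect, asking which real-world parameter, if any, plays the role of `threshold` here.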