-"Does": the main cause must have been the AI, not external circumstance.
-"an": must be a single event (we are not combining unrelated smaller disasters).
-"AI": must be a system (or multiple systems) that relied on artificial intelligence for its decisions.
-"disaster": the AI must've done something that most smart people would say is malicious, reckless, or poor judgement. Unless the AI was directly following orders by a human who we agree was authorized to give those orders, then that doesn't count.
-"kill": they must have deceased, not merely been injured/poisoned/etc.
-"at least 1,000,000,000": if the disaster causes ongoing deaths (such as by poisoning the atmosphere) then we will include the ongoing deaths, with the best available estimate.
-"Before 2029": resolves No in 2029-Jan if the above hasn't happened, otherwise resolves Yes whenever there is a consensus that it happened.
@ScroogeMcDuck I appreciate this series of markets. Great way to bring some clarity to what people are trying to say when they make certain claims.
We're not including mistakes that a human might also have made in the same position. The AI must've done something that most smart people would say is reckless, malicious, or poor judgement.
A human might also be reckless or malicious, or show poor judgement. If we're benchmarking against humans, we need to include the various human genocides throughout history as things that humans have done.
@MartinRandall Maybe I should just simplify this down to "...that most smart people say was reckless/malicious/poor judgement."
I had in mind the AI making "honest" mistakes, where the humans wouldn't have known better either. E.g. eradicating mosquitoes: suppose the ecological costs turned out to be much worse than anyone expected, and humans blamed the AI.
But it might be too complicated to try to taxonomize those in advance. I'll use my personal judgement, and shrink this clause down to just "if a poll of smart people would say it was reckless/malicious/poor judgement, then it counts."
@MartinRandall For the 1bn market specifically, yes, I have a high chance of being unable to resolve this market, reward traders, or price in the information. Less so for the others, in a proportional manner. We'll still be applying this decision to all the markets in this series. Also, don't be so confident that 1bn deaths by AI is such a guarantee that I'm dead; you're probably overconfident.
@ScroogeMcDuck >1bn deaths is >10% of the population, so ignoring AI specifics that's maybe a 30% chance of death for any given person. E.g., the 1918 flu is the most recent comparable event that comes to mind.
I think AI specifics increase the chance somewhat on top of that. Plus you could die in some other way before then.
Maybe you are particularly well suited to surviving AI disasters. Still, I think a market like this works better with objective resolution criteria, to avoid anthropic bias due to creator death.
This is less serious for the other markets in the series.
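A rough back-of-envelope for the numbers above (a minimal sketch; the ~8bn population figure and the uniform-deaths assumption are mine, not from the thread):

```python
# Rough anthropic-bias arithmetic. Assumptions (illustrative, not from the market):
#   - world population ~8 billion
#   - deaths spread roughly uniformly, so P(a given person dies | event)
#     is approximately deaths / population
POPULATION = 8e9

def p_personal_death(deaths: float, population: float = POPULATION) -> float:
    """Chance a given person dies, if deaths are spread uniformly."""
    return min(deaths / population, 1.0)

print(f"P(die | >=1bn deaths) ~ {p_personal_death(1e9):.1%}")  # ~12.5%
print(f"P(die | >=8bn deaths) ~ {p_personal_death(8e9):.1%}")  # 100.0%
```

The ~12.5% floor is before adding AI specifics or other causes of death, which is roughly where the "maybe 30%" estimate above comes from.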
@MartinRandall This is why I haven't bothered adding the "all humans" or "8bn" etc. markets. I went up to 1bn and stopped, because I agree the anthropic bias kind of ruins it if you go any higher. My aim is for the "slope" of the series to be informative, even if we can't accurately pick out the "all die" odds.
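To make the "slope" point concrete, here is a minimal sketch under the same uniform-deaths assumption as above: a Yes payout is only useful to a trader who survives to collect it, so the higher-threshold markets in a series like this carry less and less information.

```python
# Why anthropic bias worsens as the threshold rises: a Yes share only
# pays off for a trader who survives to collect it. Under the same
# uniform-deaths assumption, P(survive | >=N deaths) ~ 1 - N / population.
POPULATION = 8e9

for deaths in (1e6, 1e7, 1e8, 1e9, 8e9):
    p_collect = max(1.0 - deaths / POPULATION, 0.0)
    print(f">={deaths:.0e} deaths: P(survive to collect) ~ {p_collect:.1%}")

# At the 1bn threshold a trader still collects ~87.5% of the time, so the
# price is only mildly distorted; at 8bn nobody collects, so that market's
# price would carry almost no information. Hence stopping the series at 1bn.
```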