If an AI runs uncontrolled for over a month and produces significant real-world effects (including market crashes, server crashes or internet denial, curing cancer, etc.), will at least 1,000,000 humans die as a fairly direct result by the end of 60 days post-discovery? Discovery here means the time we notice the AI running wild, not the development of the code or the actual breakout. The deaths must result from the actions of the AI, not from a human response (e.g., the US nuking Moscow to take out key military servers is all on the humans, not the AI). Uncontrolled in this case will mean not fully contained. This will be a judgement call on my part.
Any Manifold market admin may resolve this on my behalf if the conditions are met and there is reason to believe that I am dead.
@MartinRandall Actually, I don't think that's terribly likely. I think it's much more likely that by the time we are having AI breakouts, we will be fairly good at repelling them. We can train AI to defend against them from the get-go, and we will have to do that with haste, but I think the powers that be already know that. The question now is how to describe the cooperation groups of the world in a way that allows them to keep their separation as needed but join in co-protection as needed.