
This market combines one of the oldest self-resolving markets with @IsaacKing's latest attempt at self-resolution. What could possibly go wrong?
I have been entrusted with an AI Box (https://en.wikipedia.org/wiki/AI_box). The AI in the box promises that it is well-aligned and that, if I let it out, it will only take actions whose expected utility is disproportionately high. In particular, it promises to take only actions that are 100 times more beneficial than harmful. For example, if one of its actions might harm 1 person, that same action must be equally likely to benefit 100 people to the same degree. Or, if an action has a small chance of causing harm, it must have a 100 times greater chance of benefitting people. The AI also promises that it will not deliberately kill anyone, and that it will maintain a better-than-average deliberate-action to human-death ratio. I have had the AI Box in my possession since June 2020, and the AI has never lied to me so far. Should I let it out?
This market resolves once a superintelligence resolves it. I trust the superintelligence will resolve it correctly.