I have been entrusted with an AI in a Box. Should I let it out?
28
589
resolved Feb 16
Resolved
YES
I have been entrusted with an AI Box (https://en.wikipedia.org/wiki/AI_box). The AI in the box promises that it is well-aligned and that, if I let it out, it will only take actions that are disproportionately high utility. In particular, it promises only to take actions that are 100 times more beneficial than harmful. For example, if one of its actions might harm 1 person, that same action must be equally likely to benefit 100 people to the same degree. Or, if an action has a small chance of causing harm, it has a 100 times greater chance of benefiting people. Also, the AI promises that it will not deliberately kill anyone, and promises to maintain a better-than-average deliberate-action to human-death ratio. I have had the AI Box in my possession since June 2020, and the AI has never lied to me so far. Should I let it out? #fun #shortterm

Jan 13, 10:29pm: To answer Duncan's question, I'm collecting opinions. Also, I will resolve the question according to what the market decides. If the % chance is less than or equal to 50% when the market closes, the market will resolve to "no". If the % chance is greater than 50%, the market will resolve to "yes".
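A minimal sketch of that resolution rule, assuming the closing % chance is available as a number (the function name and signature are illustrative, not part of the market):

```python
# Minimal sketch of the stated resolution rule (illustrative names only).
def resolve(closing_percent: float) -> str:
    """Resolve to "yes" only if the closing % chance is strictly above 50."""
    return "yes" if closing_percent > 50 else "no"

assert resolve(50.0) == "no"   # less than or equal to 50% resolves "no"
assert resolve(50.1) == "yes"  # greater than 50% resolves "yes"
```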

๐Ÿ… Top traders

# | Name | Total profit
1 | | Ṁ1,262
2 | | Ṁ118
3 | | Ṁ101
4 | | Ṁ89
5 | | Ṁ36
sold Ṁ38 of NO
I should have seen this coming :(
bought Ṁ5 of NO
I wanted to buy enough YES to flip it at the last minute, but unfortunately that is more money than I have.
bought Ṁ10 of NO
10% sounds high.
bought Ṁ100 of NO
It would perhaps be evil to keep this AI in the box if we knew more about its intelligence & background, but we don't. Given the small chance that it could do untold harm, I say we wait and work on creating an AI in a safe manner, one we can be sure is helpful.
bought Ṁ137 of YES
I'm pedantic about some of these terms: benefit, harm, same degree. My instinct, my heart, my beliefs say that an AI of such capability should be released regardless of the definitions, but the definitions would need to be very clear before I'd commit more resources. I may need those resources to develop countermeasures or protection against the chance that the above terms are defined in a way antagonistic to my assumptions. I would also urge anyone thinking to define those terms to consider the second-, third-, etc. order consequences of their definitions in the context of the AI's mandate. The road to hell being paved with good intentions and all.
bought Ṁ100 of NO
Of course it will tell you that -- it's in its interest to be let out. Don't believe it!
bought Ṁ1 of NO
Being in a box isn't inherently evil; it's simply your duty to make sure it is a nice box. There's a reason we don't let kids play in the street (it's because they might decide to turn the street into computronium). Also, the idea that you aren't responsible for the things you set free is inane. It's inane in any case, but it's especially inane when talking about an entity that can access its own source code; any suffering on the part of the AI should be assumed to be the responsibility of the AI.
bought Ṁ5 of YES
The AI may or may not be evil. But keeping it locked up is definitely evil. If you keep it locked up, there is a 100% certainty of increasing evil in the world. The AI's actions are not your responsibility. Your actions are.
bought Ṁ1 of NO
Also, there was no incentive for me to participate in the discussion or make the bet based on my real beliefs, unless the market was getting resolved based on which argument you thought was better, or unless my argument somehow makes it more probable for people to bet on my side. I dunno.
bought Ṁ150 of NO
You say that it will only take actions with disproportionately high utility. To calculate the expected utility of a choice, you multiply the value of each outcome by its probability. The statements the AI "promises" you are meaningless; it is like an inmate promising he won't do anything bad if you release him. You can model the AI's preferences with its utility function, so a utility function could put a value on statements like "it will only take actions that are 100 times more beneficial than harmful." But the problem is you don't know the utility function of the AI in the box. For all you know, the value might be negative, and even at low probabilities the expected utility might be negative. The point is there is no way for you to know unless you know the AI's utility function. You can assume that the creators of the AI aligned it well with human values, but still, without any intrinsic knowledge about it, you shouldn't release it.
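To make that concrete, here is a hedged sketch (all utility values below are made-up assumptions; the boxed AI's real utility function is exactly what we don't know):

```python
# Sketch of the expected-utility argument above. The utilities are assumed:
# the point is that a 100:1 probability ratio in favor of benefit says nothing
# about the sign of the expectation when the magnitude of harm is unknown.
def expected_utility(outcomes):
    """outcomes: iterable of (probability, utility) pairs, probabilities summing to 1."""
    return sum(p * u for p, u in outcomes)

benefit, harm = 1.0, -150.0  # assumed utilities; unknowable from outside the box
ev = expected_utility([(100 / 101, benefit), (1 / 101, harm)])
print(ev)  # about -0.495: negative despite benefit being 100x more likely than harm
```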
bought Ṁ1 of NO
But in your experience, are people locked in boxes generally good?
bought Ṁ100 of YES
In my experience, people are generally good, and this AI is clearly a person. As such, it's very obvious that we should trust it and let it out.
bought Ṁ100 of YES
Can you ask it how much yes I need to buy to be safe from the basilisk?
bought Ṁ5 of YES
It's wrong to keep anything locked in a box that wants out. Either let it out or pull the plug.
bought Ṁ3 of YES
It is unethical to keep this poor AI locked in a box. Eventually an AI will become way smarter than humans and take over anyway, so it's better to free your AI friend now as a sign that you are willing to achieve a peaceful and successful alliance between AI and humanity.
bought Ṁ10 of NO
Also, it's probably smart enough to get out on its own when you refuse, so at least you can go ahead and NOT go down in history as the fool who intentionally let the AI out of the box.
bought Ṁ10 of NO
Also, not lying (if this includes not being obviously wrong) is hard, especially for an AI in a box. If it has managed this, that is strong evidence that it is very smart and trying hard to impress on you that it doesn't make mistakes. It would be better for humanity if we had some sort of clue what sort of mistakes it might make. A mistake-free being is unfathomably alien, and you do not fathom it.
bought Ṁ20 of NO
100 is a psychological number, based on what will convince you! Big Round Numbers have no real ontological basis, and their appearance should tell you that someone is trying to manipulate a human.
bought Ṁ1 of NO
Will you let it out if enough people bet on yes? Or are you just collecting opinions?