Will the responder one-box in this Transparent Newcomb's Paradox?
Resolved YES (Jan 19)

Newcomb's paradox is a decision-theory thought experiment in which there are two boxes: Box 1 is transparent and contains $1,000, and Box 2 contains $1,000,000 iff a reliable predictor predicts that the participant will take only Box 2.

In our case, our reliable predictor will be Manifold. Box 2 will contain a manalink for Ṁ1000, multiplied by the probability that this market assigns to YES (e.g. at 50% YES, Box 2 contains Ṁ500). Box 1 contains Ṁ100.

I will select a random participant in this market who holds fewer than 100 shares at close time.

I will offer that participant the choice of only Box 2, or of both Boxes 1 and 2. This market resolves YES if they take only Box 2, and NO otherwise. If the random participant doesn't make their choice within 24 hours, I will randomly select another Manifolder to replace them.


🏅 Top traders

| # | Name | Total profit |
|---|------|--------------|
| 1 |  | Ṁ269 |
| 2 |  | Ṁ191 |
| 3 |  | Ṁ166 |
| 4 |  | Ṁ102 |
| 5 |  | Ṁ102 |
predicted YES

My anonymized version of this (with M$1000 vs M$1 in the boxes) resulted in a prediction of 75% and a one-box decision: /jack/will-the-responder-onebox-in-this-n-6b2e3bebaf7a

And here's another version that has different box amounts:

Manifold in the wild: A Tweet by Xeniamorph 🔍

@diviacaroline I think this is the only market I've bet on, actually https://manifold.markets/Tetraspace/will-the-responder-onebox-in-this-t

Manifold in the wild: A Tweet by pandora's curse 🖤🏂

I made M$85 in profit trading on 'Will the responder one-box in this Transparent Newcomb's Paradox?'! https://manifold.markets/Tetraspace/will-the-responder-onebox-in-this-t?r=c3Bvb2t5bWFjaGluZQ

Third page, eighth user

@NuñoSempere, one-box or two-box?

predicted YES

@Tetraspace One box.

predicted YES

@Tetraspace

Notes for people following this:
- This answer was partially out of fun, partially out of the reasons I outline here: <https://manifold.markets/Tetraspace/will-the-responder-onebox-in-this-t#L4HV1BOh8BieoBiS3CKR>
  - On that topic, if YES bettors want to bribe me after the fact, you can send me a mana-link (<https://manifold.markets/links>), e.g. on twitter (@NunoSempere)
    - Note that this only works for YES bettors; I couldn't ask NO bettors for retroactive bribes, since they don't believe in this acausal shit.

- I also bet on <https://manifold.markets/LudwigBald/will-the-responder-onebox-in-the-ot> to exploit knowing my result, but would have one-boxed anyways.


Overall I found this heaps of fun, and in particular trying to figure out whether the acausal incentive here was or wasn't like in Newcomb's problem (it isn't; it's much weaker). Thanks a lot to all!

predicted YES

I sent a 5 Mana post-hoc bribe.

predicted NO

@Tetraspace if you want to bribe me after the fact for driving the price lower and reducing the amount in box 2, I also believe in acausal shit.

@Tetraspace your random number is: 88

Salt: UfSKqSnkKSQS9qOUD88Z, round: 2623093 (signature 8c03566595b5eabb0f0b60485c98d810a48e38c924f6d0a7a700e0f71ff5b1453e539b0a7fa5bae690ebcea1e386c6ca0055c8ee3cbaf3f1f9526251ff5826b1b26827a3b5c5ba2afb50329a01673632e810b2ab1c4245e903c93fc87e647ff6)

@Tetraspace you asked for a random integer between 1 and 161, inclusive. Coming up shortly!

Source: GitHub, previous round: 2623091 (latest), offset: 2, selected round: 2623093, salt: UfSKqSnkKSQS9qOUD88Z.
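For the curious, below is a minimal sketch of how a beacon round's output plus a salt could be mapped to an integer in 1-161. This is illustrative only: the bot's actual scheme is whatever is linked on GitHub above, and the randomness value used here is a placeholder, not the real round output.

```python
import hashlib

# Illustrative only: one common way to turn a public randomness beacon output
# plus a salt into an integer in [1, n]. The real bot's scheme may differ, and
# the modulo step introduces a (negligible) bias for a 256-bit digest.
def draw(randomness_hex: str, salt: str, n: int) -> int:
    digest = hashlib.sha256(bytes.fromhex(randomness_hex) + salt.encode()).digest()
    return int.from_bytes(digest, "big") % n + 1

# Hypothetical beacon output for round 2623093 (not the real value):
example_randomness = "ab" * 32
print(draw(example_randomness, "UfSKqSnkKSQS9qOUD88Z", 161))
```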

@Tetraspace Oops, need to add it to the group.

Alright, let’s go! There are 161 users on this market. On my phone, there are 40 on a page; 20 YES and 20 NO. So my numbering is gonna be 1-20 first page yes, 21-40 first page no, etc. When one of the columns runs out, all the numbers are allocated to the remaining one.

If the user is invalid (a bot, has more than 100 shares), I’ll just reroll.
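A minimal sketch of that numbering scheme (assuming full columns of 20 YES and 20 NO on every page up to the drawn number, and ignoring the reroll case); for the draw of 88 it gives the same "third page, eighth user" result noted above:

```python
# Sketch only: maps a drawn number to (page, column, position) under the
# numbering described above. Assumes every page has full columns of 20 YES
# and 20 NO; the "column runs out" case isn't handled.
def locate(n: int, per_column: int = 20) -> tuple[int, str, int]:
    page = (n - 1) // (2 * per_column) + 1        # 40 numbers per page
    within_page = (n - 1) % (2 * per_column)      # 0-based offset on that page
    column = "YES" if within_page < per_column else "NO"
    position = within_page % per_column + 1       # 1-based position in the column
    return page, column, position

print(locate(88))  # (3, 'YES', 8) -> third page, eighth user in the YES column
```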

sold Ṁ125 of NO

Is there going to be any need for vesting shares like in Jack's market?

@RogerYang Nope, just the 100 limit in this market

bought Ṁ91 of NO

Lots of people talking about the virtue of acausally cooperating with other bettors, but what about the benefits of acausally cooperating with Tetraspace by betting NO and committing to two-box?

predicted YES

I created another instance of this problem with modifications to make it more like the original Newcomb's problem:

Specifically, box B contains 1000x the amount of box A, and Manifold will predict the decision of the player after their identity is known, instead of predicting the decision of a random player before they are selected.

predicted NO

I will two-box and the incentives after the fact will be for others to two-box, regardless of how they predicted. This is different to the actual Newcomb's paradox in which I would one-box.

sold Ṁ28 of NO

I want to get an unbiased estimate of whether the responder would one-box, so I made this other market: (note that this is not necessarily an arbitrage opportunity)

predicted NO

@LudwigBald Doesn't the existence of this market function as another bribe for one-boxing? The original market won't ask people who bet over Ṁ100, so that they have no incentive to one-box just to win the bet, but the new market effectively allows someone to hold enough shares that they're incentivized to one-box.

Still thinking no, since most people won't precommit to one-boxing / follow through / read the comments to find the bribe offers / come up with a system to profit more off of a YES.

predicted YES

At the moment, users betting "yes" easily outnumber users betting "no." (This is even more the case if we remember to eliminate people with >100 shares from consideration, as per the resolution criteria.) If we assume that the yes bettors care more about their betting history and being right than they do about their total amount of mana, it seems more likely that - if the selection were done right now, at least - a randomly chosen participant would one-box to prove themselves right, simply because the yes bettors outnumber the no bettors at the moment. I know it's not guaranteed that a yes bettor would want to prove themselves right, but I for one wouldn't want to make myself wrong in a bet just to get a bit more mana. (I also like the idea of making a higher number of users correct [and therefore probably happy], which *also* makes me want to one-box if chosen.)

predicted NO

@mukimameaddict Interesting assumption there. If people only cared about their final right/wrong binary records, I imagine they'd only make 1-mana bets. More mana implies more money, more activity, more opportunism, or (centrally) more continuously correctly-calibrated confidence in predictions.

come on guys

predicted NO

The incentive for one-boxing is much lower here than it is in the typical Newcomb problem. Newcomb's problem relies on the fact that the second box potentially contains way more money than the first box, and that the predictor is very likely to be accurate, to ensure that the expected utility of one-boxing is higher than the expected utility of two-boxing. Relax these factors too much, and this is no longer the case. Now, the second box will only contain about 6 times as much as the first box, depending on what the final probability ends up being, and Manifold's "reliable prediction" clearly isn't that reliable - it has a confidence only a little above 50%.

To make things even worse, the amount of money in the second box, rather than being based on a binary decision, is based on a probability. That makes the benefit of one-boxing even smaller, since you would still expect to get plenty of money from the second box even if you two-boxed.
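As a rough illustration using the amounts in this market (Box 1 = Ṁ100, Box 2 = Ṁ1000 × p, where p is the market's YES probability), the sketch below treats `shift` as a hypothetical amount by which a participant's commitment could move p; the actual influence of one participant on the market is unknown:

```python
# Rough payoff sketch for this market: Box 1 holds 100, Box 2 holds 1000 * p.
# "shift" is a hypothetical amount by which committing to one-box would raise
# the market probability (and committing to two-box would lower it).
def one_box(p: float, shift: float = 0.0) -> float:
    return 1000 * min(p + shift, 1.0)

def two_box(p: float, shift: float = 0.0) -> float:
    return 1000 * max(p - shift, 0.0) + 100

for p in (0.5, 0.6, 0.75):
    print(p, one_box(p), two_box(p))

# With shift = 0, two-boxing always pays Ṁ100 more. One-boxing only wins if a
# commitment could swing the probability by more than 0.05 in each direction
# (a 0.1 total swing), which is why the acausal incentive here is so weak.
```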

A third difference is that we don't know who the participant will be. Even if we were 100% certain that one specific person would two-box, that wouldn't really change the odds by much, and therefore wouldn't change how much money is in the box. That means each individual person doesn't really have any incentive to one-box unless they think everyone else's decisions are strongly correlated with their own.

Also, the most straightforward interpretation of expected utility theory says that you should two-box when the boxes are transparent. This doesn't necessarily mean the participant will two-box, since they may follow a decision theory that allows for precommitments, but it is yet another factor making it less likely that the participant will one-box here than in the regular Newcomb scenario.