How this "Coin toss" derivative from Snake eyes will be resolved?
Resolved N/A (Dec 27)

I think that I (the NO party) and @ShitakiIntaki (the YES party) have found a "reduction" of the Snake Eyes problem (link at the end). By "reduction" I mean the simplest possible task that already causes a divergence of opinions. And it feels to me that people are likely to take the same sides here as they took on the original market.

The problem is:

I will throw a coin. If Heads, one person is chosen and he loses. If Tails, 9 people are chosen and all of them win.

Given you are chosen to play, what is the probability you lose?

It is not an infinite series of events where some of the throws cause the game to continue; it is just a single round with one throw.

The market resolves NO if, at market closure, I think that the answer is 50%.

Resolves YES if I am convinced of any other number.


My current position:

Some people intuitively say that there are 9+1 places, so 1/10 is the probability of losing given you are chosen. But those are not spatial places, not eggs and boxes where you solve a permutation problem. They are different results of different branches of a probability tree. Only the people within the second group are equal to each other, and none of them is equal to the person on the other branch of the tree.

The phrase "given you are chosen to play" only limits the amount of cases we look into, but it does not equalize those cases. And those cases are not equal.

At the start we know that a person is chosen (that is given), so "the person A is chosen" is the 100% base of our tree.

The tree splits into two halves depending on the toss.

"The person A is chosen AND Heads" is 100%*50%=50%. There is one more step in thos branch. There is only one possible person to end up being in this case, so let's call this "place" an index 1.

"The person A is chosen AND Heads AND he is assigned index 1" has 100%*50%*100%=50% probability

The other branch "The person A is chosen AND Tails" is 100%*50%.

But it branches further: there are nine equal indices (2..10) that the person A could end up with. Here they are equal to each other, because they share the same parent branch.

"The person A is chosen AND Tails AND he ends up with index 4" has 100%*50%*1/9. There are 9 such branches. The person A has lower probability to be any specific index 2..10 than index 1.

The final calculation is: sum over all cases [result of the case * probability of that case]. The probability of the case is the result's weight.

0.5 [prob of case 1] * 1 [ratio of losses in case 1]

+ 0.5 * 1/9 * 0 * 9 [the nine Tails branches]

The probability is 0.5.
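A minimal sketch of this branch-weighted sum in Python (it only redoes the arithmetic above, nothing beyond it):

```python
# Branch-weighted sum for the Coin Toss tree described above.
# Each branch: (probability of reaching it, ratio of losses in it).
branches = [(0.5, 1.0)]                 # Heads: index 1, the single loser
branches += [(0.5 * 1/9, 0.0)] * 9      # Tails: indices 2..10, all winners

answer = sum(prob * loss_ratio for prob, loss_ratio in branches)
print(answer)  # 0.5
```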

I see WHY people vote for 1/10. They feel like there is a pre-prepared pool of 10 people, one of whom is guaranteed to end up in the losing case while the rest end up in the winning case, like a permutation problem. But there is no pool of potential players. Our calculation does not depend on the size of a pool; it can be infinite or of undefined size (if, for example, I just pick people from the street). And there is no equality of places between index 1 and index 2, which is needed for permutation problems. And no pool exists, because the draft of those 9 people might not even happen. There is a 50% chance those people-places do not even exist, so they cannot have a weight of more than 50%.


The same logic applies to the Snake Eyes problem.

The probability of game ending in round 1:

1/36

The ratio of dead people in that case:

1

Game ending in round 2:

35/36*1/36

The ratio of dead in that case:

2/3

And so on to infinity.

This is an infinite sum which CONVERGES.

1/1 * 1/36

+2/3*35/36*1/36

+4/7*(35/36)^2*1/36

...

We do not weight the ratio of dead people by the number of people in that case; we weight it by the probability that the case was reached. Weighting by the people count would be absurd, because the count already figures inside the ratio. And it would give infinite weight to the very last, "never-ending" case.

To be pedantic, we can account for that infinite case too (although the fact that the sum converges already hints at something).

The probability of that case is an infinitely small number. Let's mark it ~0.

The ratio of dead people in that case is strictly zero. 0.

The last term would then be 0*~0 = 0. Adding that to 0.52 does not change the answer.
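A quick numeric sketch of the series above, assuming the general term for a game that ends in round k is a death ratio of 2^(k-1)/(2^k-1) with probability (35/36)^(k-1) * 1/36 (which is the pattern the listed terms follow):

```python
# Partial sums of: sum over k of [2^(k-1) / (2^k - 1)] * (35/36)^(k-1) * (1/36)
total = 0.0
for k in range(1, 1001):
    death_ratio = 2**(k - 1) / (2**k - 1)      # 1, 2/3, 4/7, ...
    case_prob = (35/36)**(k - 1) * (1/36)      # probability the game ends in round k
    total += death_ratio * case_prob
print(round(total, 4))  # ~0.522, consistent with the 0.52 mentioned above
```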

I am using ratios because we are asked about probability. Probability is the ratio of suitable cases to all reachable cases. I have seen people trying to work with absolute values instead and trying to fit in expected counts of dead people. But I've noticed a common mistake in the original market.

My infinite sum looks like this:

dead/players*prob + dead/players*prob ...

They tried to divide the expected number of dead people by the expected number of players. That is incorrect. Here is why: you cannot convert a/b + c/d + e/f into average(a,c,e)/average(b,d,f).

1/2+2/3+3/4 != (1+2+3)/(2+3+4)

I simplified "expected number" into "average" for the sole purpose of the mistake being visible. That approach breaks the infinite series of fractions and make a new one, which is meaningless.


To sum up: THE WEIGHT of a case is the probability of the case being reached by the game process, not the number of people in that case (because, as I said in the first part, those people-places are not equal).

Only in simple schoolbook cases, where all boxes are equal and can be reached from a single parent branch of the probability tree in one step, can we say that ending up in any of those boxes is equally probable.

  • If you have reached this part of the text, don't forget that 50% here corresponds to NO. That is done to keep the party names consistent.

  • Snake eyes

predictedNO

Resolves N/A, because I have come to think such problems are meaningless (this market would in any case resolve based on my opinion).

For some problems, the most suitable answer is "the question is poorly defined". I should not have used the word "chosen" without describing the exact procedure.

You should be very careful with what exactly you mean by "chosen". The way the choosing process works determines which answer is the correct one.

For every natural number n, when there are n people from whom 9 winners or 1 loser will be randomly picked, the information that you have been picked makes it more likely that the coin landed Tails and that you are among the winners, because you are more likely to be picked out of n when there are more attempts to do so.

However, if n is not finite, or if you were somehow supposed to be picked regardless of the result of the coin toss, this logic no longer applies. When the required number of people is simply created de novo, and we do not live in a universe with a limited number of souls, one of which is picked for every new person who is created, the correct answer is 50%.

predictedNO

@a07c My position is that the first crux is here "Given you are chosen to play, what is the probability you lose?". "Given A" is not the same as "Conditioned upon A".

With "Conditioned" the event A neither happened nor is guaranteed to happen, it is a possibility. In this case you might end up in either group or not being chosen at all, Find prob of (suitable cases and A) and divide it by the prob of the A. Bayes.

With "Given" we are supposed to treat the information like a happened fact, not a possibility. Like if the question was asked after the draft of the people (or like in rigged case you said "supposed to be picked regardless of the result of the coin toss" when asked before the coin toss). In this case you know you ended up either in one group or in another, thus you have a significant amount of info to "update your belief" from the probability you would give when asked "Conditioned" version of the question.

The second crux

"there are n people from which 9 winners or 1 looser" in my opinion is not applicable here. Neither the choosing process nor the pool were defined. It could be "pick the first 1 or (depending on the coin) 9 people who come into some specific airport (undefined/unknown pool). Or look at the coin first, use magical power to create appropriate number of people and immeditely call them chosen (pool was non-existant before the action, and after its size is equal to the picked amount).

Since the question was asked, I have started to feel that some native English speakers do not make any distinction between "given" and "conditioned upon". Maybe (???) the distinction between the concepts does not exist in all languages? Or was the word "given" misused in English so much that it blended into the other concept?

Even though Daniel's market resolved YES by Lorxus's judgement, the "YES champions" did not convince me they were right in their treatment of the question with the word "given" (because they were bringing a lot of irrelevant math from other fields into the Discord debate).

predictedNO

Lol. I was blocked by the Snake Eyes creator.

Same was done by "the king" (Whales vs Minnows)

predictedYES

Your reasoning would make sense if the coin had a 50% chance of being tails given that you are chosen to play. But it doesn't. It has a 50% chance of being tails given that you don't know whether you are chosen to play. Since the coin being tails increases your chances of being chosen to play, the coin is more likely to be tails than heads given that you are chosen to play. Finding out that you were chosen to play is giving you information about the coin toss.

predictedNO

@AndrewHebb there is no pool to be chosen from. Imagine that a robot is created after the toss and brought to the game (the number of robots created corresponds to heads/tails). There is an infinite number of possible different robots to be created. There is an infinite number of people in Snake Eyes. I state that when the pool size is not finite, being chosen is the same as being created for the experiment.

predictedNO

New variant (identical except numbers).

I throw a coin.

If Heads, 2 red robots are created, named "1" and "2".

If Tails, 3 blue robots are created, named "3", "4", "5".

After gaining consciousness you get the knowledge you were created in the described process.

What is the probability you are red robot?

What is the probability you are robot with name x?

My answers:

There is 50% you are red. 50% you are blue.

25% you are called 1, 25% you are called 2. Adds up to 50%.

1/6 you are called 3, 1/6 you are called 4, 1/6 you are called 5.

Adds up to 50%.

Note that probabilities are the same only for those robots who coexist. You cannot give by default the same probabilities to outcomes on different branches of the probability tree.

You can give the same probabilities only to children of a single branch (3, 4, 5 are all children of the same branch).
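A small Monte Carlo sketch of one way to operationalize "you" here: sample a run of the experiment, then pick uniformly among the robots actually created in that run. That particular sampling model is an assumption (it is not specified in the variant); it reproduces the numbers above, while the SIA-style alternative discussed further down gives 1/5 per robot instead.

```python
import random
from collections import Counter

def one_run():
    """One run of the variant: fair coin, then robots are created."""
    if random.random() < 0.5:                      # Heads
        robots = [("red", 1), ("red", 2)]
    else:                                          # Tails
        robots = [("blue", 3), ("blue", 4), ("blue", 5)]
    return random.choice(robots)                   # "you" = a uniformly random created robot

trials = 200_000
results = Counter(one_run() for _ in range(trials))

red = sum(c for (color, _), c in results.items() if color == "red")
print("P(red) ~", red / trials)                    # ~0.5
for name in (1, 2, 3, 4, 5):
    share = sum(c for (_, n), c in results.items() if n == name) / trials
    print(f"P(name {name}) ~", round(share, 3))    # ~0.25, 0.25, 0.167, 0.167, 0.167
```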

---

One more variant.

If Heads then nobody is created.

If Tails then 3 blue robots are created.

In the Heads case there are 0 branches that lead to the creation of a robot (who would turn out to be you).

What is the probability of the robot with name x being created? 50%, because the parent branch is the result of the coin.

What is the probability you are called "4"? Here the "proportional probability redistribution" kicks in. If we are conscious, that means the process entered the second branch. So Tails becomes 100%, and being any specific robot is 33%.

---

There might be many branches left after gaining the new knowledge, not only one. In that case we redistribute probability 1 proportionally among all the suitable branches. In general it is a harder process than just replacing 50% with 100%, and it includes recalculating all children accordingly, keeping their proportions relative to the prior weights of the tree branches.

predictedNO

@Primer @Sailfish @dreev @ShitakiIntaki

I would like you to read this expansion.

predictedYES

@KongoLandwalker Just as the fact that you are chosen in Snake Eyes makes it likely that you are part of the >50% of people that die, here the fact that it is more likely that you are one of the 3 blue robots leads to 2/5 red, 3/5 blue and 1/5 for each number.

If you ask "What is the probability that at least 1 red robot is created?", I get the same numbers as you. But from the point of view of the robot, we need to use Anthropic... erm... Rothropic reasoning.

That's my view, at least, but I don't think there is consensus. The analogous and well-studied (on LessWrong and in the literature) problem is Sleeping Beauty.

predictedNO

@Primer

Just as the fact that you are chosen in Snake Eyes makes it likely that you are part of the >50% of people that die, here the fact that it is more likely that you are one of the 3 blue robots leads to 2/5 red, 3/5 blue and 1/5 for each number

No, there is a fundamental difference.

In my example, the parallel children of a single parent branch are 1, 2 and, separately from them, 3, 4, 5. But not 1, 2, 3, 4, 5 all together.

In Snake Eyes the percentage is not calculated as "you are any one of the participants of a game".

It uses probability redistribution. For any possible game ending, calculate the probability of that game. Given that the game ended in round x, we know that 2^x - 1 people/participants COEXIST. So we distribute their parent probability over them. They are the result of a single branch of events, of a single game result. Only when they are children of the same branch can you make an even distribution.

In my example 1, 2 and 3, 4, 5 NEVER COEXIST; they are in parallel universes; they are not equal. 1 and 2 have the same probability as they are children of the same game result (Heads); 3, 4, 5 have the same probability as they are children of the same result (Tails).

1 is not equal to 3.

When you say 1/5 you completely ignore how any of those parallel universes can be achieved.

If the coin is unfair and there is 99% Heads and 1% Tails, do you still think it is 1/5 for any robot?

Try it out with paper and a random generator. In 99% of game results you get red robots.

The "chosen"/"given that" only limits the branches we look at. It does not make those branches equal, that would delete information about the branch weight.

@KongoLandwalker for the variant

New variant (identical except numbers). I throw a coin. If Heads, 2 red robots are created, named "1" and "2". If Tails, 3 blue robots are created, named "3", "4", "5". [... the rest of the variant and the answers are quoted in full above ...]

In this setting, at no point does the observer receive new information. The only information they have is that a coin was tossed. So in this setting you cannot ask (and have not asked) a conditional probability question which has a prior and posterior expectation. There is no evidence to conditionalize upon, other than your existence, so this would be fully an anthropic question of SSA vs SIA.

In the other example, your existence is sufficient to disprove the hypothesis that the coin flip result was heads; it was offered as a trivial example so I could better understand how you are viewing the problem.

These problems diverge quite a bit from the Snake Eyes problem, because here the only observers are the ones created in the scenario, whereas in the Snake Eyes problem the whole population observes the game and only a fraction of them will get to play on average (under certain conditions everyone will get to play).

predictedNO

@ShitakiIntaki

only a fraction of them will get to play

Only a fraction of all possible robots will come into existence.

The undefined pool of all possible robots and undefined pool of all possible people have the same effect, because both are undefined.

"Given you know you are chosen to play in Snake Eyes" is the same as "Given you know you are a robot created in the Coin Toss".

In both cases "probability you are chosen" and "probability you are created" are set to 1 by the new information.

predictedNO

@ShitakiIntaki "SSA vs SIA" from wiki

Unlike SIA, SSA is dependent on the choice of reference class. If the agents in the above example were in the same reference class as a trillion other observers, then the probability of being in the heads world, upon the agent being told they are in the sleeping beauty problem, is ≈ 1/3, similar to SIA.

My point is that this referencing around observers in SSA is nonsense. Probability theory as a science is defined around repetition of the whole experiment.

SIA is nonsense as a whole as it ignores the probability of outcomes, making them all equal.

And I do think that "chosen" interpretation in Snake eyes (restrictional vs conditional) is tightly tied to SSA vs SIA.

@KongoLandwalker What is the appropriate way to phrase a question requesting a conditional (or relative) probability so that it is not misconstrued as a restriction imposed by the construction?

predictedYES

@KongoLandwalker

When you say 1/5 you completely ignore how any of those parallel universes can be achieved.

I don't think so. My 1/5 is downstream of being one of two red robots with 2/5 probability: 1/2 of 2/5. Same with the blue robots.

I'll have to think about your unfair coin variation. This might go in the direction of Shooting Room and similar paradoxa. And also in the direction of the measure problem.

predictedNO

@Primer

I don't think so. My 1/5 is downstream of being one of two red robots with 2/5 probability: 1/2 of 2/5. Same with the blue robots.

Let's say you flipped Heads, and you don't know at all what happens in the case of Tails. It would be impossible to calculate 1/5, as you don't know about the 3 blue robots. You would have to say that the appearance of any specific red robot is 50% * 50% = 0.25.

But then, when you get the info about what happens on Tails, you would update to 1/5?

The problem here is that you update on the information of the parallel universe we already diverged from. It is no longer part of your universe with Heads.

Example to specifically target this.

I give you a fair coin to toss. You get Heads. I ask you: what was the probability of that after I handed you the coin? You say 50%.

But then I update your information: I also had an alternative plan, to instead switch the coin for one which always results in Tails.

If you try to update on information from parallel universe:

" it is either Heads or Tails here, or definitely Tails in the other universe, so, as always, i will count them equaprobable, so chance was 1/3"

Or you might

" it is either Heads or Tails here, or Tails vs Tails there, so probability is 1/4".

Messed up.

The point is: if I handed you the fair coin, no other parallel universes matter. You have to give a prediction for the specific point of the probability tree (the weight of a branch) without taking parallel branches into account. You take into account only how the tree branches from a certain point; you don't look upstream or into other cases. If we did take them into account, we would have to be consistent: look into all possible universes which branch away at all points in the past. And that would no longer be probability theory, because if we can look into all universes, and into the past and even the future (because in some cases I might create the blue robots 3 days later than I would the red ones, and you WOULD HAVE TO KNOW THAT future to say it is 1/5), then we have perfect knowledge and don't need probabilities at all.

If we never look into other universes, then we can keep everything proportional. We don't use or need the probability of our universe existing; we just locally say that Heads/Tails is 50/50, that among red robots the choice is 50/50, and among blue robots 33/33/33. And to predict the total probability of any robot we just multiply along the branches which lead to that robot's creation.

The information about parallel universes should not affect the prediction for your universe.

predictedYES

@KongoLandwalker

But then, when you get the info about what happens on Tails, you would update to 1/5?

The problem here is that you update on the information of the parallel universe we already diverged from. It is no longer part of your universe with Heads.

I think I would update from 1/2 to 1/5. Not very sure about it, though.

You take into account only how the tree branches from a certain point; you don't look upstream or into other cases. If we did take them into account, we would have to be consistent: look into all possible universes which branch away at all points in the past. And that would no longer be probability theory, because if we can look into all universes, and into the past and even the future (because in some cases I might create the blue robots 3 days later than I would the red ones, and you WOULD HAVE TO KNOW THAT future to say it is 1/5), then we have perfect knowledge and don't need probabilities at all.

This is why I don't identify as a frequentist. I think we need to look at all past, present and future branches, and in the absence of knowledge reason under uncertainty. We end up with subjective probabilities: How should someone bet / what is the credence?

I'm sorry for not engaging more thoroughly, I need to somehow restrict my lifetime investment in decision theoretic questions, I'm not Eliezer from the early 2000s 🤣

predictedYES

I think everyone understands the difference between the two framings of the problem, but to summarize:

"Given that you are chosen to play", you are chosen to play in all rounds. In the long run, heads and tails come up the same amount of times, so you win or lose half the time.

"Given that you are chosen to play", there is some prior probability that we are chosen to play, given the new information that we were in fact chosen, what is the probability that the coin came up heads?

Both are correct, I just believe that the second phrasing is a useful framing of the problem, while the first is not. If we were deciding whether or not to play this game, we care about the second framing, not the first. If you were to game this out with a real coin and real people, you would model it correctly with the second case, not the first.

predictedYES

Also, from this hopefully it's clear that there is no point in simulating it. Nobody has made any math errors and nobody disagrees (I think) about the mechanics of either. Specifically, you do in fact get 1/2 for the first problem and 1/10 for the second.

predictedYES

@Sailfish I'm not actually sure that the first one is coherent? Being chosen in all rounds contradicts the random process. I actually don't know how to make this make sense besides the Bayesian way.

We can actually think of the Bayesian way as a restriction: ignore all possible worlds in which you were not chosen. Across the remaining possible worlds, how likely are you to lose?

What other way can we think of the restriction version?

predictedYES

Oops, our comments crossed paths. Can anyone concisely spell out the procedure that outputs 1/2 here?

predictedYES

@dreev

You would get the exact same result with Bayes' theorem depending on how you interpret the restriction. I agree with this framing for the 1/10 result, which is what I called framing two:

ignore all possible worlds in which you were not chosen

The other framing is this

there are no worlds in which you are not chosen

In this case it's one half; we then only care about the probability of the coin, since it is a given that we were chosen to play.

predictedYES

@Sailfish I'm saying that that contradicts the part of the problem statement that says you're chosen uniformly randomly from the pool.

predictedYES

@dreev Sure, but that isn't in the problem statement, quoted in full:

The problem is:

I will throw a coin. If Heads, one person is chosen and he loses. If Tails, 9 people are chosen and all of them win.

Given you are chosen to play, what is the probability you lose?

You could interpret this as just throwing away all worlds in which you are not chosen, since it is given. I agree that that is not useful, but there is no mathematical sin being committed here. If you consider only the rounds in which you are chosen, you end up with 1/2; that is the actual result. In the set of all worlds in which you are chosen to play, in the long run you will lose exactly half the time.

predictedYES

Maybe it's better to just describe how you would simulate it. Flip a coin N times. Consider every result, since we are using P(Chosen) = 1.0; it's a condition of the problem. Count how many results are heads and how many are tails. It will of course be proportional.

The other case is where we take the prior probability that we are chosen from the pool for any coin toss; in this case P(Chosen) = 5/N for any finite N. You are starting from different axioms in each case.
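A sketch of the two simulations described here, with a hypothetical finite pool of N = 100 people (the pool size and trial count are arbitrary choices for illustration):

```python
import random

N = 100            # hypothetical pool size
rounds = 200_000
you = 0            # "you" are person 0 in the pool

# Framing 1: P(Chosen) = 1 by construction -- only the coin matters.
heads = sum(1 for _ in range(rounds) if random.random() < 0.5)
print("framing 1, P(lose) ~", heads / rounds)                   # ~0.5

# Framing 2: random selection from the pool, conditioned on being chosen.
chosen_rounds = losses = 0
pool = list(range(N))
for _ in range(rounds):
    if random.random() < 0.5:                 # Heads: one person is chosen and loses
        picked, lose = random.sample(pool, 1), True
    else:                                     # Tails: nine people are chosen and win
        picked, lose = random.sample(pool, 9), False
    if you in picked:
        chosen_rounds += 1
        losses += lose
print("framing 2, P(lose | chosen) ~", losses / chosen_rounds)  # ~0.1
```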

predictedYES

@Sailfish But that's just simulating 10 coin flips and counting the heads.

Another sanity check: What's the probability of rolling snake eyes given that one of the dice comes up 1? How would you simulate that? Make sure the simulation does actually roll two dice!

@dreev I assume "given" means randomly observe one of the dice came up 1 rather than restricting or requiring one dice came up 1.

predictedYES

I believe the first framing is the interesting one, as Anthropics enters the game and can give us 1/10

@Primer Still learning about anthropics, but would this be the Self-Sampling Assumption (SSA) or the Self-Indication Assumption (SIA)?


This 2001 Bostrom article is wild; I cannot tell whether Bostrom is arguing for or against SSA vs SIA.
Bostrom, N. (2001). The Doomsday Argument, Adam & Eve, UN++, and Quantum Joe. Synthese, 127(3), 359–387. http://www.jstor.org/stable/20141195

But they had a similar thought experiment 22 years ago.

The Incubator

Stage (a): The world consists of a dungeon with one hundred cells. The outside of each cell has a unique number painted on it (which can't be seen from the inside); the numbers being the integers between 1 and 100. The world also contains a mechanism which we can term the incubator. The incubator first creates one observer in cell #1. It then activates a randomization mechanism; let's say it flips a fair coin. If the coin falls tails, the incubator does nothing more. If the coin falls heads, the incubator creates one observer in each of the cells ##2-100. Apart from this, the world is empty. It is now a time well after the coin has been tossed and any resulting observers have been created. Everyone knows all the above.

Stage (b): A little later, you have just stepped out of your cell and discovered that it is #1.

predictedYES

@ShitakiIntaki I'm so confused about the confusion on this. If you roll two dice and restrict or require that one of them comes up 1, then I guess you didn't roll two dice. You rolled one of them and set the other one down carefully with the "1" facing up.

If we do this "restrict/require" thing for the game that this derivative market is asking about then it's like this:

You flip a coin, then if it's heads you choose someone randomly except just kidding, you don't choose randomly, you pick Joe and Joe loses. If it's tails then you pick 9 people randomly except not really because again you make sure Joe's included. All those people win. What's the probability that Joe loses?

We managed to get an answer of 50% (by just definitely picking Joe no matter what) but we might as well have stated the original problem and then said "now ignore all that and just flip a normal coin -- what's the probability it comes up heads?".

See what I mean? The idea of "forcing/restricting" who's chosen means blatantly contradicting the setup where people are chosen randomly. The only reasonable thing for "given" to mean is the Bayesian version. We don't literally force the random variables to do anything, we just restrict our attention to cases where they do certain things.

And this isn't an arbitrary choice. In the original snake eyes paradox, the underlying question is whether you should still be willing to participate in the doubling-groups version of the game. If you're not chosen to play then you don't care, so the probability you care about is the conditional one: restricting attention to worlds where you are chosen to play, what's the probability you die? It's a (veridical) paradox because the argument for 1/36 and the argument for 1/2 both sound compelling.

predictedYES

@dreev I'm confused about your confusion on the confusion on this. You've read about 1000 comments on or adjacent to these kinds of problems, so I wonder how you can still be confused about confusion.

If you roll two dice and restrict or require that one of them comes up 1, then I guess you didn't roll two dice.

Just try it! Roll 2 dice, look at them. If there is a 1 somewhere: write down the dice rolls, if not, forget about it. Repeat. Did you roll dice?

we might as well have stated the original problem and then said "now ignore all that and just flip a normal coin -- what's the probability it comes up heads?".

See what I mean?

Absolutely. This is exactly how NO in Snake Eyes feels. You might as well have stated the original problem and then said "now ignore all that and just throw two dice".

When you're stating that the 2-round Snake Eyes can end with everyone surviving, No holders have the feeling:

See what I mean?

predictedYES

@Primer Sounds fair. Confusion is confusing!

Just try it! Roll 2 dice, look at them. If there is a 1 somewhere: write down the dice rolls, if not, forget about it. Repeat. Did you roll dice?

Yes, perfect, this is exactly the right way to think about conditional probability. I think @KongoLandwalker is rejecting this though?

Anyway, I object to your analogy to the original snake eyes paradox. There I ask about a game with an infinite pool of players, note that it doesn't have a defined answer, and go on to specify a limiting process. It's like asking what x/(36x) is when x=0, noting that it's zero divided by zero which is not a number, and then posing a related question: what does x/(36x) approach in the limit as x approaches zero?

@dreev I was only pointing out that there seems to be some confusion between conditioning upon a randomly observable event and restricting or forcing an event, so it may be valuable to be more specific so that everyone is operating on the same understanding.

============= Now I go off topic =====================

I wonder if this interpretation difference is related to the difference between SSA vs SIA.

If I understand correctly, SSA says you are more likely to be a random observer in a smaller pool of observers, and SIA says that if you exist, you should place higher confidence in the pool of observers being larger rather than smaller.

So... I think... the SSA agent would believe 50/50 on the coin toss until they get additional information, after which they would update their belief, whereas the SIA agent would start off believing that the outcome that results in the most observers is already the most likely. I need to double-check this.

predictedNO

@dreev we agree that with both conditioning and restricting we do not look at the unsuitable possible cases. But the next step differs!

The Bayesian conditioning then states that all similar results have the same probability!

(That in my Coin Toss you are as likely to be number 1 as number 4. There are 10 people over all outcomes, so just assign 1/10 to each)

The restriction SAVES the proportions of probabilities of branches that lead to those similar events. So similar events do not have the same probability.

(In the Coin Toss, being under index 1 is 9 times more probable than being under index 4, because the "Tails" branch splits into 9 outcomes (who you end up being, given that you are known to have participated), so its 50% probability gets divided.)

predictedNO

@Sailfish

Nobody has made any math errors

I get 52% on Snake Eyes and 50% on the Coin Toss by using literally the same "tree restriction" approach. If you say that it should be 1/10, then there should be a mistake in my method or in your calculation for the Coin Toss. Or in both.

predictedNO

@Sailfish It looks to me that you are somewhat a third party. You don't think both markets should resolve the same?

You made an argument for 50% (which is the NO resolution here), but you are a YES holder.

@KongoLandwalker What happens if, on heads, we select no one, and on tails we select 9 players? If you observe that you have been selected, what is your credence that the result of the coin flip was heads? How can you SAVE the proportion of the probabilities of the branches that lead downstream from the coin flip?

predictedNO

@ShitakiIntaki

If nobody is chosen on Heads

Heads would have 50% branch weight, but the next step would be 0%. 50%*0%=0%.

Tails would have 50%, and the next step to any child would be a 1/9 conversion.

So the branches are 0 and 1/18 nine times.

But we know they sum up to 1, so keeping them in proportion would lead to 0 and 1/9 nine times.

Nine times 1/9 gives "100% sure that the result was Tails".

You could even throw away the 0 immediately, as it does not fit the "new evidence", but I want to show that the proportion is preserved for all those branches.
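A tiny sketch of that proportional redistribution (just the arithmetic from the comment above):

```python
# Raw branch weights: Heads leads to no chosen person, Tails splits over 9 indices.
raw = [0.5 * 0.0] + [0.5 * (1/9)] * 9       # [0, 1/18, ..., 1/18]

total = sum(raw)                            # ~0.5 -- the weight of the branches that fit the evidence
renormalized = [w / total for w in raw]     # [0, 1/9, ..., 1/9]: same proportions, now summing to 1

print(round(sum(renormalized[1:]), 3))      # 1.0 -> "100% sure the result was Tails"
```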

predictedYES

@dreev

But you are allowed to define the problem this way. No god of mathematics will come down and smite you if you do this. You can claim this axiom to be true and write down all the numbers using either the branch method or Bayes Theorem and you will in fact end up with 1/2 as the result. I do not think writing out more proofs is useful if they are starting from different axioms, and everyone understands that everyone else can do addition and multiplication properly. To maybe explain this more clearly, there are different assumptions for the 1/2 and 1/10 case.

In what proportion of cases does heads come up and you do not play?

For 1/2, this is zero. It is a given that we are chosen to play, so we write down the probabilities in each branch. They sum to 50% in both cases, we know that we are chosen to play, i.e. in either branch we are at least one of the people. This result holds for heads or tails picking any number of people since we know that we were chosen. I already mentioned that I do not think this is the best way of looking at the problem but you are allowed to do this without contradicting anything in the problem statement. It does not make sense to discuss a counterfactual where you were not chosen because we know from the problem that you were chosen.

For the 1/10 case, we account for the hypothetical games in which we are not chosen, and calculate the conditional probability we were chosen given heads. In this case, we get 1/10 because we are accounting for the counterfactual where a game was played and we were not chosen. We are not taking "given that you were chosen" as an immutable fact of the problem, we are calculating the outcomes of each round and the probability of playing in each round and then updating it.

predictedYES

@KongoLandwalker

If you say that it should be 1/10, then there should be a mistake in my method or in your calculation for the Coin Toss. Or in both.

Maybe this is part of the confusion: if you start from different truths, you will get different results however you calculate it. If you use Bayes' theorem and plug in P(Chosen|Heads) = 1, you will get 0.5. These are all correct ways to do the math; you can solve it in branches or with Bayes' theorem or however it makes you happiest to solve it. The disagreement is about what the probability that you are chosen is. If you say that it is 1, because it is a given that you will be chosen, so it has probability 1 and always happens, you will get 0.5. If you say that the probability that you will be chosen is 5/N for any finite pool size N, then you will get 0.1.

All of these can be true simultaneously without contradiction or error. They are starting from different axioms and therefore are solving different problems and so it makes sense that they arrive at different results.

predictedYES

@Sailfish Exactly!

And after >800 comments and various clarifications, I'm still not sure whether @dreev knows that Snake Eyes is posed as "Can we prove it's one and not the other?", but has been "evolving towards" (by clarifications and comments pointing something like 80-20 towards) asking "Can we prove the one side I'm on using the respective set of axioms?"

predictedNO

@Primer my market is at least honest in this regard. I clearly stated that its resolution depends on me changing my mind. And I am not going to update the description to make my current position more solid.

predictedYES

@KongoLandwalker @Primer I'd just like to see the axioms articulated. Also please be clear what you're accusing me of. The edit history is available (just note that the first edit to my market description doesn't count -- no one had traded yet).

@KongoLandwalker For my part, when I first presented this example, P(chosen|heads) would be 1/n and P(chosen|tails) = 9/n. P(chosen) ≠ 1, but P(chosen|chosen) = 1, which is true for any X: P(X|X) = 1.

Both systems of math are internally consistent but answer different questions. My intention was that you would observe that you were chosen at random such that the distribution of selection matters, rather than completely eliminate the selection step by setting P(chosen)=1 as a constructed requirement.

If P(chosen|heads) = 1 = P(heads|chosen)*P(chosen)/P(heads) and P(heads) = 0.5 then P(heads|chosen)*P(chosen) = 0.5

And P(chosen|tails) = 1 = P(tails|chosen)*P(chosen)/P(tails) and P(tails) = 0.5 then P(tails|chosen)*P(chosen) = 0.5

yields P(tails|chosen)=P(heads|chosen)=0.5 which is the case when P(chosen) = 1 such as you bribed your way into the game

Whereas

If P(chosen|heads) = 1/n = P(heads|chosen)*P(chosen)/P(heads) and P(heads) = 0.5, then P(heads|chosen)*P(chosen) = (1/n)*(1/2)

And P(chosen|tails) = 9/n = P(tails|chosen)*P(chosen)/P(tails) and P(tails) = 0.5, then P(tails|chosen)*P(chosen) = (9/n)*(1/2)

results in P(tails|chosen) = 9/(2n*P(chosen)) and P(heads|chosen) = 1/(2n*P(chosen))

P(tails|chosen)+P(heads|chosen) = 1, we can add these because now we have the same conditional chosen, and heads + tails is exhaustive of the probability space, so

9/(2n*P(chosen)) + 1/(2n*P(chosen)) = 10 / (2n* P(chosen) )= 1

Then P(chosen) = 5/n. HERE P(chosen) is a distribution and P(chosen) ≠ 1, because we didn't bribe our way in.

Then P(tails|chosen) = 9/(2n*5/n) = 9/10

Then P(heads|chosen) = 1/(2n*5/n) = 1/10
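A quick numeric check of that derivation for a hypothetical finite pool, say n = 100 (the specific n is an arbitrary choice; the algebra above holds for any n):

```python
n = 100                                   # hypothetical pool size
p_heads = p_tails = 0.5
p_chosen_given_heads = 1 / n              # heads: 1 of n is picked
p_chosen_given_tails = 9 / n              # tails: 9 of n are picked

p_chosen = p_heads * p_chosen_given_heads + p_tails * p_chosen_given_tails
print(p_chosen)                           # ~0.05, i.e. 5/n

p_heads_given_chosen = p_chosen_given_heads * p_heads / p_chosen
p_tails_given_chosen = p_chosen_given_tails * p_tails / p_chosen
print(p_heads_given_chosen, p_tails_given_chosen)   # ~0.1 and ~0.9
```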

I reiterate, there is a difference between a conditional and a restriction. I believe all of dreev's clarifications are consistent with the intent to condition on selection, not to force selection.

predictedNO

@dreev your FAQ3 proposes a finite case which simply cannot be achieved in the game. "If nobody dies then the game finishes" is just another game.

FAQ3 is the wrong approach in any case.

It was said many times in your market: the limit going to infinity should not iterate over all ROUNDS of a game (which is what you do by truncating the game prematurely and letting everyone live), but over ALL POSSIBLE LEGAL GAMES.

The first such game is the one which legally finished in the first round. 100% death rate and 1/36 probability of this game.

The second game is the one which legally ended in its second round: 2/3 death rate and 35/36*1/36 probability of this specific case.

Limit( N going to infinity)[Sum over all games (i from 1 to N) { (deathrate of the game which ended in round i) * (probability of a game that ended in round i)}]

Even the YES party guys basically use this formula, but they apply a multiplication by the "probability of being chosen into the game" (which is 1, but which they incorrectly take to be some fraction that depends on infinity) inside the SUM operator.

predictedYES

@KongoLandwalker Wanna make a version of the snake eyes market that does FAQ3 correctly and we can see how it goes? My contention is that FAQ3 is the best we can do.

predictedYES

Probability theory as a science is defined around repetition of the whole experiment.

and

the limit going to infinity should not iterate over all ROUNDS of a game (which is what you do by truncating the game prematurely and letting everyone live), but over ALL POSSIBLE LEGAL GAMES.

This is one possible approach to probability theory, specifically, the frequentist approach. I believe everyone already agrees that this will give you >50% in the Snake Eyes problem, at least, nobody disagreed when I wrote out that argument.

For what it's worth, I'm fairly certain the frequentist position is either undefined, the argument being that no statement can be made about a given unknown probability, or greater than 1/2. In the long run, in repeated runs of the Snake Eyes Paradox, the probability that a given bettor dies is greater than 1/2 (because this smuggles in the condition that the runs terminate, i.e. that snake eyes is rolled).

There is no "probability theory as a science", there are only sets of axioms you accept as true to solve problems, or not. It's important to be able to state your axioms and then do the math to get both results in both cases, you should be able to do the calculations so that you arrive at 1/36, or >50%, or arrive at 1/10 or 1/2 in this case. If you can't do this then you don't understand the other side's position properly. Both are "correct" in the sense that they contain no mathematical errors and follow from the stated axioms. What we are asking is which is more useful.

predictedNO

@Sailfish

frequentist approach

If there is nothing wrong in my approach, then the only way this resolves YES is if anybody convinces me that {the axioms I am using} make less sense / have less practical use / are less coherent with reality than those of some other approach.

predictedYES

@dreev Taking the clarifications and your comments into account, Snake Eyes turned into:

You're offered a gamble where a pair of six-sided dice are rolled and unless they come up snake eyes you get a bajillion dollars. If they do come up snake eyes, you're devoured by snakes.

So far it sounds like you have a 1/36 chance of dying, right?

Now the twist. First, I gather people willing to play the game. I take 1 person from that pool and let them play. Then I take 2 people and have them play together, where they share a dice roll and either get the bajillion dollars each or both get devoured. Then I do the same with 4 people, and then 8, 16, and so on.

At some point I run out of people and I stop.

Is the probability that you'll die, given that you're chosen to play, still 1/36, even for an unlimited amount of people?

predictedYES

@Primer What was it before the clarifications and comments?

predictedYES

@KongoLandwalker

then the only way this resolves YES is if anybody convinces me that {the axioms I am using} make less sense / have less practical use / are less coherent with reality than those of some other approach.

I agree! I think the following framing is correct: if we care about what is "practical" or "coherent with reality", then we care about the following question: "Should we play this game?"

Consider the case where we can bet some amount, and if we win, we double our money; if we lose, we lose whatever we bet. What is our expected value? Well, we only care about games where we actually play; all others are irrelevant, since we can neither lose what we bet nor win.

Even if we try to ignore the pool, we are being chosen as one out of every single person in existence, at the very most; but you could also consider being offered this bet in an auditorium or whatever situation you think makes the most sense. In any case, we already exist, and everyone else who might play already exists, since nobody is being summoned or created or destroyed as if by magic.

So, considering the case where we actually have money at stake "given that we are chosen to play", should we play the game at these odds? The answer is yes: there is a 1/10 chance that you lose and a 9/10 chance that you win. This is positive expected value, so we agree to play.

The reason that you consider being chosen at random from a pool is because this is how it would actually work in practice if you were to actually play this game. This is true in a single shot or in the long run, or however you care to view it; it is true whether you approach it as a frequentist or a Bayesian, or whether you use SIA or SSA. The only relevant question is whether you consider the prior probability of being chosen to be 5/N or 1, based only on your interpretation of the question.

I am making the claim that "chosen" means "selected at random from a pool", because this is how we choose things in real life. If you make the claim that "given that you are chosen" means that you are guaranteed to play in every iteration of the game regardless of whether the coin lands heads or not, that you are not selected at random from a pool but instead are guaranteed to be selected, then the probability that you win or lose is only the probability of the coin coming up heads or tails, and you break even in the long run.
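A one-line version of the expected-value comparison in that argument (the bet size is arbitrary; the two probabilities are the two framings being contrasted):

```python
bet = 1.0
ev_one_tenth = 0.9 * bet + 0.1 * (-bet)   # "chosen at random from a pool" framing
ev_one_half = 0.5 * bet + 0.5 * (-bet)    # "guaranteed to be chosen" framing
print(ev_one_tenth, ev_one_half)          # ~0.8 (positive EV) vs 0.0 (break even)
```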

predictedYES

@dreev Text is still up:

predictedYES

@Sailfish

Well, we only care about games where we actually play, all others are irrelevant since we cannot either lose what we bet or win.

I'm not sure about that. I throw a die. On 1, 2 and 3 you lose; on 4, 5 and 6 you win. You're chosen to play if I didn't throw a 1. Do you want to play?

(Not sure whether this is a good example; still very unsure about everything)

@Primer here is my final argument against your 1/5.

According to that we have:

The probability that you see the robot with index 1 is 1/5. The probability of seeing robot 2 is 1/5. The probability of seeing a red robot is P(we see 1 OR we see 2) = 1/5 + 1/5 = 2/5.

From the other end, the probability of Heads is 1/2. The probability of seeing a red robot, if Heads, is 100%. So the probability of seeing a red robot is 1/2 * 100% = 1/2.

So 1/2 = 2/5.

SIA is just mathematically wrong.

With my "tree approach" everything both adds up to 1, and adds up to 0.5 within each main branch. With 0.25 0.25 1/6 1/6 1/6 you get 50% from both sides.

@KongoLandwalker

YES is if anybody convinces me that {the axioms I am using} make less sense / have less practical use / are less coherent with reality than those of some other approach.

The point of axioms is that they are neither provable nor disprovable. The best you can do is find a paradox that arises from adopting an axiom and then decide whether you are comfortable living with that paradox, such as the Banach–Tarski paradox and the Axiom of Choice. I am not sure that we have an axiomatic difference, besides possibly what a mathematician means when they phrase a conditional probability question as "Given we observe A, what is your [relative] probability of B": to mean P(B|A), as opposed to a restriction on A such that P(A) = 1, so that the answer P(B|A) is just P(B). I would argue that setting P(A) = 1 and then just answering P(B) is boring and not a question worth asking. I believe the Snake Eyes market is supposed to be like "Ooooh, those are weird/interesting consequences when you condition on A where P(A) = 0 (in the limit)."

This coin toss market is your market and your question, but when I posed the question I was asking in a context where P(A) ≠ 1. I don't feel like you care to acknowledge my question/clarifications, but rather have latched on to P(A) = 1. I do not disagree with your math for P(A) = 1; I just don't think it is interesting, and it was not the scenario I was trying to talk about, which apparently inspired this market.

predictedYES

@Primer Text is still up? I've missed some context here.

predictedYES

@Primer

I'm not sure about that. I throw a die. On 1, 2 and 3 you lose; on 4, 5 and 6 you win. You're chosen to play if I didn't throw a 1. Do you want to play?

Yeah, this is positive expected value. If it's not clear, consider the narrower case: set the problem up the same way, and say I'm only chosen to play if you throw a 6. Do I want to play? Of course; I can't lose. Add a few more cases in and it's the same, I still win in expectation. If it's still not clear, find a die and do it with pen and paper; that's worthwhile at this point.
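A sketch of that pen-and-paper check for the die example above (lose on 1-3, win on 4-6, chosen only when the roll is not 1):

```python
import random

trials = 200_000
chosen = wins = 0
for _ in range(trials):
    roll = random.randint(1, 6)
    if roll != 1:                 # you are chosen to play only if the roll is not a 1
        chosen += 1
        wins += roll >= 4         # 4, 5, 6 are wins
print("P(win | chosen) ~", wins / chosen)   # ~3/5, so the bet has positive expected value
```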

predictedYES

@KongoLandwalker

My credence of seeing robot 1 is 1/4.

The robot's credence of being robot 1 is 1/5.

This is analogous to Sleeping Beauty.

predictedNO

@Primer but there is no structural difference.

You get the knowledge that the robot in front of you was created in the described process.

He gets the knowledge that he was created in the described process.

You both have ABSOLUTELY the same information, but you and the robot will give different answers?

predictedYES

@dreev The current phrasing of the paradox as in your question.

predictedYES

@Sailfish Sorry, seems like I couldn't verbalize this clearly. My example was trying to argue against your

we only care about games where we actually play, all others are irrelevant since we cannot either lose what we bet or win.

If games where we lose are excluded, we should care about that. Or maybe I misunderstood you.

predictedYES

@KongoLandwalker Not really. The robot would also say that my credence on seeing robot 1 should be 1/4, and I'd say his own credence should be 1/5.

predictedNO

@Primer you literally proved why this SIA approach does not work. You are both thinking entities. You both receive the same information. You give different answers. Math is not about opinions; the robot cannot say "well, it depends on the perspective". Math is about calculation, and what you demonstrated is the statement 0.25 = 0.2.

What is the probability that Primer and Red robot with index 1 are in the same room? It has only one answer.

We know the statement 0.25 = 0.2 is wrong, so the SIA assumption is wrong. It produces a mathematical mismatch not only here, but also in the fact that the 2 red robots don't add up to the 50% chance of seeing a red robot: 0.2*2 = 0.4 ≠ 0.5.

predictedYES

@KongoLandwalker 0.25 and 0.2 are probabilities (maybe better: credences or betting odds) for different questions.

How would you bet if you were Sleeping Beauty?

predictedNO

@Primer the whole thing you are trying to say by sending me to Sleeping Beauty is "under SIA all cases have equal probabilities".

I have shown you that SIA itself is a wrong assumption by showing you that the way of thinking leads to 0.25=0.2.

Referring to some existing "school of thought" is not valid argumentation. Some schools simply contain wrong statements.

Imagine, for example, that I have a wrong belief system and I try to "prove flat earth" not by making experiment or calculations, but by sending you to a website which states "there are millions of people who believe in flat earth".

predictedYES

@KongoLandwalker I'm just trying to basically say "those are different questions" and I'm pointing to Sleeping Beauty as it seems to be an analogous framing, but has an advantage: it is well known and there is plenty of literature supporting each side.

predictedNO

@Primer I understand, ok. But the Sleeping Beauty problem is not a paradox to me. It is a problem where one side has everything consistent, while in the other side's position there is a similar error of probabilities not matching from above and from below (from the coin toss and from adding up the specific robots' probabilities).

Also, my point is that in pure math the answer should depend only on the question, not on who you are. And the question is the same for both: "what is the probability that Primer and robot 1 have <Le rendez-vous>".

predictedYES

@KongoLandwalker

Also, my point is that in pure math the answer should depend only on the question, not on who you are. And the question is the same for both: "what is the probability that Primer and robot 1 have <Le rendez-vous>".

Uhlala ❤

So you ask me "Will you, Primer, make sweet love to robot 1?", and I will answer "With 25% probability" Then you run the experiment. Enter a robot, noticing he is red but not knowing his number. You ask him "Will you, red robot, make sweet love to Primer?" He will answer "Lucky me! Chances are 50:50!"

@Primer no. The robot does not know he is red! (He might not even be red.) He only knows what is stated in the problem; you are making stuff up.
