Is the answer to the Sleeping Beauty Problem 1/3?

https://en.m.wikipedia.org/wiki/Sleeping_Beauty_problem

The Sleeping Beauty problem is a puzzle in decision theory in which whenever an ideally rational epistemic agent is awoken from sleep, they have no memory of whether they have been awoken before. Upon being told that they have been woken once or twice according to the toss of a coin, once if heads and twice if tails, they are asked their degree of belief for the coin having come up heads.
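(For readers who want to see the two counting conventions side by side, here is a minimal Monte Carlo sketch in Python; the trial count and seed are arbitrary. Tallied per experiment, the Heads frequency is about 1/2; tallied per awakening, it is about 1/3. Which tally answers the question asked is exactly what the halfer/thirder dispute is about.)

```python
import random

def run_trials(n_trials=100_000, seed=0):
    """Simulate the protocol: one awakening on Heads, two on Tails."""
    rng = random.Random(seed)
    heads_experiments = 0  # experiments in which the coin was Heads
    heads_awakenings = 0   # awakenings that occurred under Heads
    total_awakenings = 0
    for _ in range(n_trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        heads_experiments += heads
        heads_awakenings += awakenings if heads else 0
        total_awakenings += awakenings
    print("Heads per experiment:", heads_experiments / n_trials)        # ~0.5
    print("Heads per awakening:", heads_awakenings / total_awakenings)  # ~0.333

run_trials()
```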

Resolves based on the consensus position of academic philosophers once a supermajority consensus is established. Close date extends until a consensus is reached.


Small print

I will use my best judgement to determine consensus. Therefore I will not bet in this market. I will be looking at published papers, encyclopedias, textbooks, etc, to judge consensus. Consensus does not require unanimity.

If the consensus answer is different for some combination of "credence", "degree of belief", "probability", I will use the answer for "degree of belief", as quoted above.

Similarly if the answer is different for an ideal instrumental agent vs an ideal epistemic agent, I will use the answer for an ideal epistemic agent, as quoted above.

If the answer depends on other factors, such as priors or axioms or definitions, so that it could be 1/3 or it could be something else, I reserve the right to resolve to, e.g., 50% or N/A. I hope to say more after reviewing papers in the comments.


I gave a proof that the answer is 1/2 and no one has refuted it yet. Isaac Linn tried to refute it by claiming that X ≠ Y, but he didn't explain why that is true. This market should be at <10% right now.

@inaccessibles

I empathize. I also hoped that after I carefully explained the correct halfer model and reasoning, more people would change their minds than actually did. My advice is to be patient, treasure the rare moments of realization that do indeed happen, and remember that this market doesn't simply depend on the objective truth of the matter, but also on philosophical consensus about it, which... isn't exactly infallible.

So I don't think that <10% is reasonable to expect in the short term. But I suppose we can bring it down to 33% for the sake of dramatic irony if nothing else.

Here is a proof that the answer to the Sleeping Beauty problem is 1/2:

Let Beauty's world before the experiment starts be represented by the σ-algebra Σ on X with probability measure P. Let "Heads-Before" be the event that the coin will land heads, considered before the experiment. Let "Heads-After" be the event that the coin landed heads, considered after Beauty's awakening. Finally, let Y be the measurable subset of X that represents the world Beauty enters after she is awakened (and learns that she is awakened), and let Q be the probability measure on the σ-algebra on Y inherited from X. Since no possible worlds are excluded when Beauty wakes up, we get that Y = X. Now, by the Theorem of Deduction, Q(Heads-After) = P(Heads-Before ∩ Y)/P(Y) = P(Heads-Before ∩ X)/P(X), and applying two trivial identities:
Heads-Before ∩ X = Heads-Before

P(X) = 1

So Q(Heads-After) = P(Heads-Before)/1 = P(Heads-Before), and as long as you're willing to accept that the probability before the experiment that the coin will land heads is 1/2, you get that the probability after Beauty's awakening that the coin landed heads is 1/2. I hope no one will think that their intuition (or even worse, generative AI) is more trustworthy than a completely formal and rigorous proof based on logic.

@luvkprovider that's the same argument, just with symbols that are hard to read instead of words that are easy to read. You're saying the probability doesn't change just because she wakes up. I'm saying it does. Convince me otherwise using stories or examples, and I might come to your side.

Claude is sad that you don't trust him. Also, he told me this:

"The commenter's proof has a subtle but critical flaw in how it handles conditionalization when there are multiple possible awakenings. Let me break down the specific error:

The key mistake is in the statement "Since no possible worlds are excluded when Beauty wakes up, we get that Y = X". This seems intuitive but is actually incorrect.

The reason why: When Beauty wakes up, she's not just learning "I am awake" as a simple fact about the world. She's experiencing a specific awakening event, which could be either:

- The Monday awakening (happens in both heads and tails)

- The Tuesday awakening (happens only in tails)

So when Beauty wakes up, she's actually in a more fine-grained possibility space than before the experiment. The original space X needs to be expanded to account for these distinct awakening events. In probability terms:

- If heads: One awakening occurs (Monday)

- If tails: Two awakenings occur (Monday and Tuesday)

The commenter's proof implicitly treats all these awakenings as the same event, which is why they conclude Y = X. But this collapses an important distinction - each awakening is a separate possible way Beauty could be experiencing the world, even though they're subjectively indistinguishable to her.

This is why the thirder position counts three possible awakening events (1 from heads, 2 from tails) rather than just two possible coin outcomes. The halfer's proof fails to properly account for this multiplication of possibilities in the tails case.

In technical terms, this is related to what philosophers call "de se" information - information about where/when you are in the world, rather than just what the world is like. The proper handling of de se information in probability theory is precisely what the Sleeping Beauty problem helps illuminate."
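(For concreteness, the thirder bookkeeping Claude describes can be written out explicitly. This is only an illustration of that convention, not an endorsement of it; the outcome names and per-experiment weights below are an illustration, not part of either side's proof.)

```python
# Thirder convention: three "centered" outcomes, each weighted by how often
# it occurs per experiment (the coin is fair, so each branch has weight 1/2).
outcomes = {
    ("Heads", "Monday"): 0.5,   # Heads: the only awakening
    ("Tails", "Monday"): 0.5,   # Tails: first awakening
    ("Tails", "Tuesday"): 0.5,  # Tails: second awakening
}
total_weight = sum(outcomes.values())                   # 1.5 awakenings/experiment
p_heads = outcomes[("Heads", "Monday")] / total_weight  # 0.5 / 1.5
print(p_heads)  # 0.333...
```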

@IsaacLinn If Y ≠ X, then give me something that is in X but not in Y.

@luvkprovider You still need to prove that P(Heads-Before) = 1/2.

@AndrewHebb No one believes that P(Heads-Before) = 1/3.

@AndrewHebb your case is quite unique.

On one hand, you are less confused than most thirders: you rightly believe that credence cannot be changed by an event that was guaranteed to be realized in the experiment, and you refused to be persuaded by such magical words as "de se evidence" or "centered possible worlds".

On the other hand, you are more confused than most thirders: you reason backwards from the assumption that P(Heads|Awake) = 1/3, arrive at the conclusion that the unconditional probability of Heads is 1/3 even before the experiment has started, and then fail to notice all the absurdity this entails.

I'm not sure what our crux is, frankly. Let's try to find it.

Do you agree that a priori probability of a coin to come Heads is 1/2?

Do you agree that about half of the coin tosses determining the awakening routine in Sleeping Beauty are Heads?

Do you agree that you don't have a way to predict future coin toss better than chance?

Do you agree that if you know that the coin toss is going to determine the awakening routine in Sleeping Beauty, you cannot predict its outcome better than chance?

Do you agree that if you can't predict a coin toss better than chance, your credence in it is 1/2?

I asked GPT, Claude, and Deepseek to give me their own answer to the Sleeping Beauty problem. I didn't bias them. They all agreed it was 1/3. Let me know if you can get any LLM to say that the halfers are right without biasing them.


My prompt: "Use all the reasoning at your disposal to give me your own answer to the Sleeping Beauty Problem"

I used the reasoning function each time

Perplexity running o3 also agrees with the thirders.

This is a market about whether experts are smart enough to realize that probability is a property that exists in the mind, not in the territory.

they are asked their degree of belief for the coin having come up heads

It's counterintuitive, but solve it the same way you solve the Monty Hall problem: If heads, 1 mind is awoken. If tails, 10^10 minds are awoken. Across all possible minds, what is the correct thing to guess?

@MagnusAnderson The question you posed is ambiguous, for the same reason the Anthropic Snake Eyes question by Daniel Reeves is ambiguous (and Martin Randall explained that one pretty well)

@MagnusAnderson

probability is a property that exists in the mind, not in the territory.

Not a crux of disagreement at all. The irony is that both halfers and thirders appeal to this principle while attempting to justify their position without much progress one way or the other.

Turns out it's not enough to simply have a map. It should also somehow approximate the territory.

solve it the same way you solve the Monty Hall problem

SB and Monty Hall have very little in common beyond "both are probability theory problems".

In Monty Hall, my credence of winning on a door switch is 2/3 not because I now make two guesses instead of one (only the last guess counts, and everyone agrees on that), but because when I switch doors I actually win in about 2/3 of the iterations of the probability experiment.

On the contrary, in Sleeping Beauty the coin is Heads in about half of the iterations of the experiment where an awakening happened, and everyone is in agreement about that. The disagreement is about whether we should count the Tails outcome twice or not.
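(The Monty Hall claim two paragraphs up is easy to check by simulation; a minimal sketch in Python, with an arbitrary trial count:)

```python
import random

def monty_hall(trials=100_000, seed=0):
    """Fraction of iterations in which switching doors wins the car."""
    rng = random.Random(seed)
    switch_wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # The host opens a door that is neither the player's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        # Switching means taking the remaining unopened door.
        switched = next(d for d in range(3) if d != pick and d != opened)
        switch_wins += (switched == car)
    return switch_wins / trials

print(monty_hall())  # ~0.667: switching wins in about 2/3 of iterations
```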

If heads, 1 mind is awoken. If tails, 10^10 minds are awoken. Across all possible minds, what is the correct thing to guess?

There are two very different experiments:

N people, you among them, are put to sleep. Then the coin is tossed. On Heads, one random person among them is awakened. On Tails, all of them are awakened. You find yourself awakened. What is the probability that the coin is Heads?

and

You are put to sleep. Then the coin is tossed. On Heads you are awakened once. On Tails you are awakened N times with a memory loss. You find yourself awakened. What is the probability that the coin is Heads?

The difference, in terms of subjective probability, is that in the first experiment you were not confident at all that you would find yourself awakened. You couldn't predict that outcome beforehand. You are somewhat surprised. And so when you are awakened you can update in favor of Tails.

While in the second you were absolutely sure that you would be awakened anyway. You could've predicted that outcome in the first place. There is nothing surprising about it at all. And so you do not get to update.

It's very clear that these two problems are not isomorphic, but for historical reasons a lot of people keep treating them as if they are and this is the source of a lot of confusion about anthropic reasoning.
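(Here is the commenter's distinction rendered as a per-iteration tally, in Python; N, the trial counts, and the seeds are arbitrary. Note that the second function deliberately conditions per iteration rather than per awakening, which is the very convention thirders dispute.)

```python
import random

def people_version(n=10, trials=200_000, seed=1):
    """N sleepers; Heads wakes one at random, Tails wakes all.
    Estimate P(Heads | *you*, person 0, are awakened)."""
    rng = random.Random(seed)
    heads_and_awake = awake = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        you_awake = (rng.randrange(n) == 0) if heads else True
        if you_awake:
            awake += 1
            heads_and_awake += heads
    return heads_and_awake / awake

def memory_loss_version(trials=200_000, seed=1):
    """One sleeper, awakened once on Heads and N times (with amnesia) on Tails.
    You are awakened in every iteration, so conditioning per iteration
    on "awake" changes nothing."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(trials))
    return heads / trials

print(people_version())       # ~0.09, i.e. 1/(N+1) with N = 10: a real update
print(memory_loss_version())  # ~0.5: no update
```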

@a07c

You are put to sleep. Then the coin is tossed. On Heads you are awakened once. On Tails you are awakened N times with a memory loss. You find yourself awakened. What is the probability that the coin is Heads?

While in the second you were absolutely sure that you will be awakened anyway. You could've predicted that outcome in the first place. There is nothing surprising about it at all. And so you do not get to update.

It's true you're sure you'd be awakened. However, if you were asked to make bets (whereupon after leaving the experiment, you got to keep the money or something), you should very obviously still bet that tails was chosen. In the 10^10 case, at even odds, you would lose $1 half the time and make $10^10 half the time (instead of making $1 half the time and losing $10^10, which would kind of suck).

This (as I understand it, after a brief reading of [some blog about snake eyes](https://risingentropy.com/anthropic-reasoning/)) is the primary motivation for listening to anthropic reasoning. And there is no difference between the scenarios you describe with respect to this train of logic.

If your answer is "I am totally unsure whether I have been awoken the one time with heads, or one of the 10^10 times with tails, and I will not update at all because I am unsurprised that I was awoken; but on the other hand, I will choose tails because I expect to become 2*10^10 times richer for it" then it seems like you have a different understanding of probability than I do.

@MagnusAnderson The connection to Monty Hall is in the reasoning process used to solve it. By increasing the numbers a lot, until you have a visceral fear of having to earn 10^10 dollars to pay down your debt if you're dumb enough to bet on heads, you realize that actually the probability is less.

@MagnusAnderson

See what I mean about the idea of probability being in the map not helping? 😉 I've just shown you the difference between the two experiments in terms of expectations and surprise, and you immediately switched to talking about betting and its consequences in the territory.

if you were asked to make bets (whereupon after leaving the experiment, you got to keep the money or something) you should very obviously still bet that tails was chosen

Oh, sure thing. In per awakening betting you should bet on Tails. Not because Tails is more likely, though, but because it's rewarded more.

Consider a fair coin toss. You are offered a bet on the outcome, such that whatever bet you make is repeated when the coin is Tails. This makes betting on Tails a better idea than on Heads. Does it mean that the probability of a fair coin coming up Tails is 2/3? Of course not. Same logic here.
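(A quick expected-value check of this analogy, with a $1 stake chosen for illustration:)

```python
# Bet $1 at 1:1 on a fair coin; if the coin lands Tails, the bet is repeated.
p = 0.5
ev_betting_tails = p * (-1) + p * (+2)  # Heads: lose $1; Tails: win $1 twice
ev_betting_heads = p * (+1) + p * (-2)  # Heads: win $1; Tails: lose $1 twice
print(ev_betting_tails, ev_betting_heads)  # 0.5 -0.5
# Betting Tails is better, yet P(Tails) is still 1/2: the asymmetry is in the
# payout structure, not in the probability.
```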

If this appears confusing, remember that betting odds depend not only on the probability of an event, but also on its utility. The disagreement between (double) halfers and thirders is about how to factorize expected utility. If you want to go into more detail, in the first part of this post I formally derive correct betting odds for different betting schemes from both the thirder and halfer perspectives:

https://www.lesswrong.com/posts/cvCQgFFmELuyord7a/beauty-and-the-bets

Now, it may appear that it is simply a matter of perspective and both ways to reason about expected utility are equally valid. But as a matter of fact, the thirder way imposes weird costs, which become clear in more nuanced betting schemes.

Suppose you are asked to bet $100 on Heads in Sleeping Beauty at 1:1 odds before the experiment has started, and if you agree you are immediately gifted $1. At the time this sounds like a good idea: Heads is 50% likely, so you get a free dollar in expectation.

Then you're awakened during the experiment. What do you think about the bet you made beforehand, if you now believe that P(Heads) is only 1/3? Naturally, you regret it; your chances of winning have just been reduced. Now, suppose that you are offered an opportunity to make the bet null and void if you pay $5. This should sound like a good idea: losing $5 is less bad than a 2/3 chance of losing $100. So you agree.

And so, lo and behold, you're predictably losing 5(N+1)/2 − 1 dollars per iteration of the experiment, where N is the number of awakenings on Tails. It seems you should've done something differently. But then your betting behavior will not be following your probability estimate, the exact same sin that you were unfairly accusing halfers of!

Notice that if we use the same scheme in the experiment with N different people, only one of whom is awakened on Heads, then you are indeed better off agreeing both to the initial bet and to its nullification upon awakening. Yet another demonstration that the two experiments are not isomorphic and that you shouldn't reason the same way about them.
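(The per-iteration loss claimed above can be checked by simulation. The stakes, and the assumption that the $5 nullification fee is paid at every awakening, follow the scheme as the comment describes it; trial count and seed are arbitrary.)

```python
import random

def nullification_loss(n=2, trials=100_000, seed=2):
    """Take the pre-experiment bet ($100 on Heads at 1:1, $1 gift), then pay
    $5 to void it at every awakening (each time, since memory is erased).
    Returns the average profit per iteration of the experiment."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else n  # N awakenings on Tails
        total += 1 - 5 * awakenings     # $1 gift minus the voiding fees
    return total / trials

n = 2
print(nullification_loss(n))  # ~ -6.5
print(1 - 5 * (n + 1) / 2)    # closed form from the comment: -(5(N+1)/2 - 1)
```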

@IsaacLinn Upon an event that is guaranteed, probabilities don’t change. This is a corollary of what I call the "Principle of Deduction", which is exhibited by every well-defined model of probability. If you are still confused, consider that P(Monday) and P(Tuesday) don’t exist, or just read the comments below that thoroughly explain why the answer is 1/2.

@luvkprovider You'll have to put in a little more effort than that ;) You've got to reason with me, explain things. Just citing impressive-sounding principles won't do the heavy lifting for you.

Besides, we're talking about a situation in which it makes sense to use "probabilities" higher than 1 to describe the average number of events that occur. Is this consistent with all the established rules of probability? Almost definitely not, but we can still use rules that are consistent. When a formal system isn't adequate to describe a situation you're in, sometimes you need to make your own system, or at least look around for other systems that already exist.

You might ask me, "Why do you need to do such weird things when I already have the answer?"

I would respond, "There is an aspect of reality that your system hasn't captured. If we were to repeat this experiment, and sleeping beauty were to bet on the state of the coin, she would earn more money if she guessed tails than if she guessed heads. Why doesn't the answer attained from your system reflect this?"

@IsaacLinn

to describe the average number of events that occur.

Why are you so sure that individual awakenings are events?

Is this consistent with all the established rules of probability? Almost definitely not-- but we can still use rules that are consistent.

But then this other thing won't be "probability" but something else instead, with different properties and no particular reason to expect the credence of a rational agent to behave according to it.

If we were to repeat this experiment, and sleeping beauty were to bet on the state of the coin, she would earn more money if she guessed tails than if she guessed heads. Why doesn't the answer attained from your system reflect this?"

It does, in fact, reflect that. I talk about it in detail here:
https://www.lesswrong.com/posts/cvCQgFFmELuyord7a/beauty-and-the-bets

But in short, her betting odds are adjusted because the utility of a bet on Tails is higher. The same reasoning as with: bet on a coin toss, but if the outcome is Tails the bet is repeated.

A mind is summoned into being and is told thus:

"The greater God summoned 3*10^100 realities into existence. It then flipped the meta-cosmic equivalent of a coin. If it was heads, you were summoned into 10^100 of them. If it was tails, you were summoned into 2*10^100 of them. What is your degree of belief that the coin came up heads?"

Do you think this problem is equivalent?

Alternatively, here's a less similar problem that I hope illustrates my point better:

God flips a coin before creating the universe to decide between two sets of laws of physics. If heads, the odds of intelligent life evolving are 10^(-30). If tails, the odds are 99%. Given that you're reading this, which way did the coin come up?

@IsaacLinn The coin came up tails with almost 100% certainty.

@IsaacLinn And to answer your original comment, the question involving 3*10^100 realities is not well-defined because it doesn’t make sense for there to be multiple "yous".

@luvkprovider Why doesn't it make sense for there to be multiple identical versions of me?

@IsaacLinn If you think it makes sense, then answer this: There are two yous. One of them has their eyes closed and the other has their eyes open. What is the probability that you have your eyes open?

@IsaacLinn how can a less similar problem illustrate the point better? Unless your point is that people tend to confuse Sleeping Beauty with other problems that can seem isomorphic but actually are not? In that case, yes, that is indeed the case.

God flips a coin before creating the universe to decide between two sets of laws of physics. If heads, the odds of intelligent life evolving are 10^(-30). If tails, the odds are 99%. Given that you're reading this, which way did the coin come up?

In about half of the iterations of such a probability experiment, life does not exist, so P(Life) ≈ 50%. Life almost never exists when the coin is Heads and quite likely exists when the coin is Tails. So the existence of life is evidence in favor of Tails:

P(Life|Heads) ≈ 0, P(Life|Tails) ≈ 1, therefore: P(Tails|Life) ≈ 1

On the contrary, in Sleeping Beauty awakenings always happen. There are no iterations of the experiment where Beauty is not awakened: P(Awake) = 1. It always happens on Heads, and it always happens on Tails. So awakening isn't evidence one way or the other. P(Awake|Heads) = 1, P(Awake|Tails) = 1, therefore: P(Heads|Awake) = P(Heads) = 1/2
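(The two updates above, as explicit Bayes arithmetic. The likelihoods in the second call encode the per-iteration halfer convention that is under dispute; the helper function is an illustration.)

```python
def posterior_heads(p_obs_given_heads, p_obs_given_tails, prior=0.5):
    """Bayes' theorem: P(Heads | observation) for a fair-coin prior."""
    num = p_obs_given_heads * prior
    return num / (num + p_obs_given_tails * (1 - prior))

# Coin-flipped physics: observing life is overwhelming evidence for Tails.
print(posterior_heads(1e-30, 0.99))  # ~1e-30: P(Heads | Life) is negligible

# Sleeping Beauty, per iteration: awakening is guaranteed under both outcomes.
print(posterior_heads(1.0, 1.0))     # 0.5: no update
```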

@a07c Is that a typo, or are you saying P(heads) = 1? I'm confused. In any case, your argument from previous comments also applies to my (according to you, nonisomorphic) scenario. You know you exist either way. P(exist) = 1, so P(exist|tails) = 1 and P(exist|heads) = 1 despite the terrible odds. The only thing I did was make the odds more extreme to illustrate the point that observing your own existence gives you information that changes your priors (priors from before your existence? Maybe priors isn't the right word in this case). If you want to say that the odds of Sleeping Beauty waking up from heads are the same as her waking up from tails, you should also say that the origin of our universe had even odds of heads vs. tails. It was 50/50, after all.

To defuse the tension somewhat, I'll add that I truly want to be correct, so if you can refute my argument, I'm willing to change my mind.

Additionally, if you would like to speak with me on a voice call, I think I would find that very enjoyable

@luvkprovider I apologise, I've failed to adequately describe my scenario. The minds are identical, and the realities are identical apart from the boolean of whether or not there is a mind in it. I made the numbers very large because I thought it would make my point clearer: that observing your own existence makes [situations that make your existence more likely to have happened] more likely to have happened. Clearly, the large numbers in my scenario did more harm than good to its readability.

To directly answer your question, which involves a scenario different from mine, I have knowledge of whether my own eyes are open, and I have no knowledge of any other versions of me. Currently, my eyes are open. I do not define the eyes of other versions of me as belonging to me. English isn't well-suited to talking about multiple realities. If you like, we can define some new words to make things less ambiguous.

@IsaacLinn Yep, that was a typo. Thanks for noticing it.

In any case, your argument from previous comments also applies to my (according to you, nonisomorphic) scenario.

And I've just written exactly how. But sure, let's go into it a bit deeper.

You know you exist either way. P(exist) = 1, so P(exist|tails) = 1 and P(exist|heads) = 1 despite the terrible odds.


I think you are confusing the unconditional probability that life exists in an iteration of the probability experiment, P(Life) = 1/2, with the conditional probability that life exists in an iteration of the experiment given that we already know that life exists, P(Life|Life) = 1.

The confusing part is that it's hard to conceptualize how you could not know that life exists. It may seem that there is no knowledge state that the unconditional probability can correspond to, and therefore it becomes doubtful that such a probability is even coherent. But here is a way: you just need to be ignorant about what "Life" means.

At first you know that your universe was created in some iteration of the experiment. With about 50% chance it has this property "Life", correlated with the state of the coin. This corresponds to the fact that in about 50% of the iterations of the experiment the created universes have this property: P(Life) ≈ 1/2, and in nearly all of them the coin is Tails.

Then, by learning what "Life" means and its implications, you figure out that your universe has this property. This updates you to P(Life|Life) = 1 and P(Tails|Life) ≈ 1. This corresponds to the fact that in every iteration of the experiment where it's known that the universe has the property "Life", it does indeed have this property. And in nearly all of them the coin came up Tails.

The situation is very similar to an experiment where you are simply shown the outcome of a coin toss. Yes, you are now confident in the outcome, but this doesn't mean that the coin was unfair and the prior probability wasn't 50%. You can still coherently reason about the prior probability even after you know the posterior one; they are part of the same mathematical model. This doesn't change even if you are shown the coin and told about the experiment only if the coin is, say, Tails, so that your credence is already in the updated state P(Tails|Tails) = 1. This update still happened according to the logic of Bayes' theorem, starting from P(Tails) = 1/2.

The only thing I did was make the odds more extreme to illustrate the point that observing your own existence gives you information that changes your priors (priors from before your existence? Maybe priors isn't the right word in this case)

This confusion is indeed an edge case of talking about probability in natural language in terms of observations. In order not to be led astray, just think about what is going on in the probability experiment: which outcomes and events are realized in the current iteration to the best of your knowledge, and what their ratio is among all the iterations of the experiment.

Whether the realization of an event leads to an update depends on the setting of the experiment. In experiments where the realization of the event is not guaranteed, where there are iterations in which the event is not realized, its realization gives information. Otherwise it doesn't. This is a universal rule for any event, not just existence or awakening.

Do you see how SB and your experiment are different in this core way?

To defuse the tension somewhat, I'll add that I truly want to be correct, so if you can refute my argument, I'm willing to change my mind.

Additionally, if you would like to speak with me on a voice call, I think I would find that very enjoyable

I empathize.

The argument can indeed raise tensions, but mostly when people are talking past each other and refuse to engage beyond mere vibes and intuitions. As long as we are listening to each other and trying to understand the underlying math, it should be fine.

I'm more comfortable with a text-based medium, so let's try it here first.

Also, I have a series of posts on the Sleeping Beauty problem where I go into lots of detail; maybe you'll find them useful. The two most relevant are:

https://www.lesswrong.com/posts/SjoPCwmNKtFvQ3f2J/lessons-from-failed-attempts-to-model-sleeping-beauty
https://www.lesswrong.com/posts/gwfgFwrrYnDpcF4JP/the-solution-to-sleeping-beauty

But feel free to explore the whole series if this is something you are really interested in.
