
https://en.m.wikipedia.org/wiki/Sleeping_Beauty_problem
The Sleeping Beauty problem is a puzzle in decision theory in which whenever an ideally rational epistemic agent is awoken from sleep, they have no memory of whether they have been awoken before. Upon being told that they have been woken once or twice according to the toss of a coin, once if heads and twice if tails, they are asked their degree of belief for the coin having come up heads.
Resolves based on the consensus position of academic philosophers once a supermajority consensus is established. Close date extends until a consensus is reached.
References
Self-locating belief and the Sleeping Beauty problem, Adam Elga (2000) - https://www.princeton.edu/~adame/papers/sleeping/sleeping.pdf
Sleeping Beauty: Reply to Elga, David Lewis (2001) - http://www.fitelson.org/probability/lewis_sb.pdf
Sleeping Beauty and Self-Location: A Hybrid Model, Nick Bostrom (2006) - https://ora.ox.ac.uk/objects/uuid:44102720-3214-4515-ad86-57aa32c928c7/
The End of Sleeping Beauty's Nightmare, Berry Groisman (2008) - https://arxiv.org/ftp/arxiv/papers/0806/0806.1316.pdf
Putting a Value on Beauty, Rachael Briggs (2010) - https://joelvelasco.net/teaching/3865/briggs10-puttingavalueonbeauty.pdf
Imaging and Sleeping Beauty: A case for double-halfers, Mikaël Cozic (2011) - https://www.sciencedirect.com/science/article/pii/S0888613X09001285
Bayesian Beauty, Silvia Milano (2022) - https://link.springer.com/article/10.1007/s10670-019-00212-4
Small print
I will use my best judgement to determine consensus. Therefore I will not bet in this market. I will be looking at published papers, encyclopedias, textbooks, etc., to judge consensus. Consensus does not require unanimity.
If the consensus answer differs between "credence", "degree of belief", and "probability", I will use the answer for "degree of belief", as quoted above.
Similarly, if the answer is different for an ideal instrumental agent vs an ideal epistemic agent, I will use the answer for an ideal epistemic agent, as quoted above.
If the answer depends on other factors, such as priors or axioms or definitions, so that it could be 1/3 or it could be something else, I reserve the right to resolve to, e.g., 50%, or n/a. I hope to say more after reviewing papers in the comments.
@AdamSpence Both SIA and SSA are wrong, because they treat non-isomorphic problems as isomorphic. See this comment thread where I talk about these problems and clearly demonstrate this difference: https://manifold.markets/MartinRandall/is-the-answer-to-the-sleeping-beaut#40wc4d8j3ji
As for the doomsday argument, which the paper you cite appeals to, here I explain how one should reason about it. No SIA required:
@IsaacLinn I think it is. The "but a coinflip has a 1/2 chance of landing heads" side is beating out the "okay, so here's my explanation on why it's 1/3:" side just because the halfer position is shorter, simpler, and quippy.
@inaccessibles Honestly, I don't understand your proof, and I don't think "proof" means what you think it means. Claude rebutted your post. It's in the comment you've already referenced.
@IsaacLinn The only thing Claude said that wasn't unnecessary rambling was that Y = X "is incorrect". It is obvious that Y = X, and if you disagree you should say why.
I empathize. I also hoped that, after I carefully explained the correct halfer model and reasoning, a lot more people would change their minds than actually did. My advice is to be patient, treasure the rare moments of realization that do indeed happen, and remember that this market doesn't simply depend on the objective truth of the matter, but also on philosophical consensus about it, which... isn't exactly infallible.
So I don't think that <10% is reasonable to expect in the short term. But I suppose we can bring it down to 33% for the sake of dramatic irony if nothing else.
@IsaacLinn I'm sorry if the tone of some comments offends you. I'd like to make it explicit that no one is stupid just for holding a particular stance on Sleeping Beauty. It's a confusing problem that originated from an initially confused approach to probability theory, and over the decades more and more confusion has accumulated. Some people are basically luckier than others, in that their intuitions started out in the right place to see through all of it.
That said, there are reasons for confidence. Once you've managed to grasp the correct model and its implications and see how all the confusion resolves, the problem appears to be very easy. Previously you expressed the desire to actually figure out the correct answer. I think at this point I've addressed all the object-level concerns that you've raised here. What is still not to your satisfaction? Why do you still feel that you can't be confident in double halfism?
I made a market about this question: https://manifold.markets/inaccessibles/will-i-win-my-m104-stake-httpsmanif
Here is a proof that the answer to the Sleeping Beauty problem is 1/2:
Let Beauty's world before the experiment starts be represented by the σ-algebra Σ on X with probability measure P. Let "Heads-Before" be the event that the coin will land heads, considered before the experiment. Let "Heads-After" be the event that the coin landed heads, considered after Beauty's awakening. Finally, let Y be the measurable subset of X that represents the world that Beauty enters after she is awakened (and learns that she is awakened), and let Q be the probability measure on the σ-algebra on Y inherited from X. Since no possible worlds are excluded when Beauty wakes up, we get that Y = X. Now, by the Theorem of Deduction, we get that Q(Heads-After) = P(Heads-Before ∩ Y)/P(Y) = P(Heads-Before ∩ X)/P(X), and applying some trivial identities:
Heads-Before ∩ X = Heads-Before
P(X) = 1
So Q(Heads-After) = P(Heads-Before)/1 = P(Heads-Before), and as long as you're willing to accept that the probability before the experiment that the coin will land heads is 1/2, you get that the probability after Beauty's awakening that the coin landed heads is 1/2. I hope no one will think that their intuition (or even worse, generative AI) is more trustworthy than a completely formal and rigorous proof based on logic.
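For concreteness, here is a minimal Python sketch of the two frequencies at stake (illustrative only, with made-up variable names, not part of the proof): across iterations, Heads comes up in about half; across awakenings, about a third occur after Heads. The disagreement is over which ratio corresponds to Beauty's credence.

```python
import random

N = 100_000
heads_iterations = 0    # iterations in which the coin landed Heads
heads_awakenings = 0    # awakenings that occur in a Heads iteration
total_awakenings = 0

for _ in range(N):
    heads = random.random() < 0.5      # fair coin toss
    awakenings = 1 if heads else 2     # once on Heads, twice on Tails
    if heads:
        heads_iterations += 1
        heads_awakenings += 1          # the single Heads awakening
    total_awakenings += awakenings

# Per-iteration frequency: conditioning on Y ("Beauty is awakened at
# least once"), which holds in every iteration, leaves ~1/2.
print(heads_iterations / N)
# Per-awakening frequency: what thirders count instead, ~1/3.
print(heads_awakenings / total_awakenings)
```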
@luvkprovider that's the same argument, just with symbols that are hard to read instead of words that are easy to read. You're saying the probability doesn't change just because she wakes up. I'm saying it does. Convince me otherwise using stories or examples, and I might come to your side.
Claude is sad that you don't trust him. Also, he told me this:
"The commenter's proof has a subtle but critical flaw in how it handles conditionalization when there are multiple possible awakenings. Let me break down the specific error:
The key mistake is in the statement "Since no possible worlds are excluded when Beauty wakes up, we get that Y = X". This seems intuitive but is actually incorrect.
The reason why: When Beauty wakes up, she's not just learning "I am awake" as a simple fact about the world. She's experiencing a specific awakening event, which could be either:
- The Monday awakening (happens in both heads and tails)
- The Tuesday awakening (happens only in tails)
So when Beauty wakes up, she's actually in a more fine-grained possibility space than before the experiment. The original space X needs to be expanded to account for these distinct awakening events. In probability terms:
- If heads: One awakening occurs (Monday)
- If tails: Two awakenings occur (Monday and Tuesday)
The commenter's proof implicitly treats all these awakenings as the same event, which is why they conclude Y = X. But this collapses an important distinction - each awakening is a separate possible way Beauty could be experiencing the world, even though they're subjectively indistinguishable to her.
This is why the thirder position counts three possible awakening events (1 from heads, 2 from tails) rather than just two possible coin outcomes. The halfer's proof fails to properly account for this multiplication of possibilities in the tails case.
In technical terms, this is related to what philosophers call "de se" information - information about where/when you are in the world, rather than just what the world is like. The proper handling of de se information in probability theory is precisely what the Sleeping Beauty problem helps illuminate."
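To make Claude's counting explicit, here is a minimal sketch of the thirder calculation (the equal weights are Elga's indifference assumption, not a theorem):

```python
from fractions import Fraction

# The fine-grained space of centered outcomes described above: one
# (coin result, day) pair per awakening. Equal weights follow Elga's
# indifference argument; halfers reject exactly this step.
centered = {
    ("Heads", "Monday"):  Fraction(1, 3),
    ("Tails", "Monday"):  Fraction(1, 3),
    ("Tails", "Tuesday"): Fraction(1, 3),
}

p_heads = sum(w for (coin, _), w in centered.items() if coin == "Heads")
print(p_heads)  # 1/3
```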
@AndrewHebb your case is quite unique.
On one hand, you are less confused than most thirders, as you rightly believe that credence cannot be changed by an event that was guaranteed to be realized in the experiment, and you refused to be persuaded by such magical words as "de se evidence" or "centered possible worlds".
On the other hand, you are more confused than most thirders, as you reason backwards from the assumption that P(Heads|Awake) = 1/3, arrive at the conclusion that the unconditional probability of Heads is 1/3 even before the experiment has started, and then fail to notice all the absurdity this entails.
I'm not sure what our crux is, frankly. Let's try to find it.
Do you agree that the a priori probability of a coin coming up Heads is 1/2?
Do you agree that about half of the coin tosses determining the awakening routine in Sleeping Beauty come up Heads?
Do you agree that you don't have a way to predict a future coin toss better than chance?
Do you agree that if you know that the coin toss is going to determine the awakening routine in Sleeping Beauty, you cannot predict its outcome better than chance?
Do you agree that if you can't predict a coin toss better than chance, your credence in it is 1/2?
When Beauty wakes up, she's not just learning "I am awake" as a simple fact about the world. She's experiencing a specific awakening event, which could be either:
- The Monday awakening (happens in both heads and tails)
- The Tuesday awakening (happens only in tails)
This is the core mistake. She does experience awakenings. But those are not events.
It's actually quite easy to see if you are being rigorous. An event is a set of one or more mutually exclusive outcomes of the experiment. If any of the outcomes in this set is realized in an iteration of the experiment, it means that the event is realized in that iteration.
If Monday awakening and Tuesday awakening were two well-defined events in the Sleeping Beauty experiment, then there would have to be some mutually exclusive outcomes that these events consist of. Either the events themselves are mutually exclusive - none of the outcomes they consist of are the same - or they are mutually inclusive - they share at least one outcome.
If we suppose that Monday awakening and Tuesday awakening are mutually exclusive, we immediately arrive at a contradiction - on Tails, both of the awakenings happen in the same iteration of the experiment, therefore they are not mutually exclusive.
Therefore, these events have to be mutually inclusive. But this contradicts your premise that the events correspond to the individual awakenings.
The way to formally define Monday and Tuesday events is this:
Monday = {Heads, Tails}
Tuesday = {Tails}
Where in semantic terms:
"Monday" means "Monday awakening happens in this iteration of the experiment", which happens every time.
"Tuesday" means "Tuesday awakening happens in this iteration of the experiment", which happens only when the coin is Tails.
So yes, one indeed has to treat both Tails awakenings as the same event in order not to contradict probability theory.
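If it helps, this event algebra is small enough to write out in full (a sketch; the names are illustrative):

```python
from fractions import Fraction

# Sample space: one outcome per iteration of the experiment - the coin
# toss. Events are sets of outcomes, exactly as defined above.
P = {"Heads": Fraction(1, 2), "Tails": Fraction(1, 2)}

monday  = {"Heads", "Tails"}  # "a Monday awakening happens this iteration"
tuesday = {"Tails"}           # "a Tuesday awakening happens this iteration"

def prob(event):
    return sum(P[outcome] for outcome in event)

print(prob(monday))            # 1: Monday happens in every iteration
print(prob(tuesday))           # 1/2: Tuesday happens only on Tails
print(prob(monday & tuesday))  # 1/2: the events are not mutually exclusive
```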
Another way to show that you can't reason about individual awakenings as if they were mutually exclusive random events, which may be more intuitive, is this:
Suppose Beauty is told that the coin is Tails, and therefore she is awakened twice. What is her credence that this awakening is happening on Monday? 50%. What about the other awakening? What is her credence that it happened, or will happen, on Monday? Also 50%. But then her credence that at least one of her awakenings happens on Monday can be calculated as:
1 - P(This is not Monday)P(Other is not Monday) = 1 - 1/4 = 3/4
Which contradicts the fact that Beauty knows that she is to be awakened on Monday in every iteration of the experiment.
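Written out, the two incompatible answers (a sketch of the arithmetic above):

```python
from fractions import Fraction

# Treating the two Tails awakenings as independent events that each
# have a 1/2 chance of being the Monday one:
p_not_monday = Fraction(1, 2)
p_at_least_one_monday = 1 - p_not_monday * p_not_monday
print(p_at_least_one_monday)  # 3/4

# But the protocol guarantees a Monday awakening in every Tails
# iteration, so the true value is 1 - hence the contradiction.
```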