https://en.m.wikipedia.org/wiki/Sleeping_Beauty_problem
The Sleeping Beauty problem is a puzzle in decision theory in which an ideally rational epistemic agent, whenever awoken from sleep, has no memory of whether they have been awoken before. Having been told that they will be woken once or twice according to the toss of a coin, once if heads and twice if tails, they are asked their degree of belief for the coin having come up heads.
Resolves based on the consensus position of academic philosophers once a supermajority consensus is established. Close date extends until a consensus is reached.
References
Self-locating belief and the Sleeping Beauty problem, Adam Elga (2000) - https://www.princeton.edu/~adame/papers/sleeping/sleeping.pdf
Sleeping Beauty: Reply to Elga, David Lewis (2001) - http://www.fitelson.org/probability/lewis_sb.pdf
Sleeping Beauty and Self-Location: A Hybrid Model, Nick Bostrom (2006) - https://ora.ox.ac.uk/objects/uuid:44102720-3214-4515-ad86-57aa32c928c7/
The End of Sleeping Beauty's Nightmare, Berry Groisman (2008) - https://arxiv.org/ftp/arxiv/papers/0806/0806.1316.pdf
Putting a Value on Beauty, Rachael Briggs (2010) - https://joelvelasco.net/teaching/3865/briggs10-puttingavalueonbeauty.pdf
Imaging and Sleeping Beauty: A case for double-halfers, Mikaël Cozic (2011) - https://www.sciencedirect.com/science/article/pii/S0888613X09001285
Bayesian Beauty, Silvia Milano (2022) - https://link.springer.com/article/10.1007/s10670-019-00212-4
Small print
I will use my best judgement to determine consensus. Therefore I will not bet in this market. I will be looking at published papers, encyclopedias, textbooks, etc, to judge consensus. Consensus does not require unanimity.
If the consensus answer is different for some combination of "credence", "degree of belief", "probability", I will use the answer for "degree of belief", as quoted above.
Similarly if the answer is different for an ideal instrumental agent vs an ideal epistemic agent, I will use the answer for an ideal epistemic agent, as quoted above.
If the answer depends on other factors, such as priors or axioms or definitions, so that it could be 1/3 or it could be something else, I reserve the right to resolve to, e.g., 50% or N/A. I hope to say more after reviewing papers in the comments.
The thing is, the key controversy around this question is whether you are asking for the probability of the coin coming up heads or the probability that the coin is observed as heads. However, since you state "they are asked their degree of belief for the coin having come up heads", this implies that what matters in your question is the probability of the observation, which is just 1/3 or 2/3 depending on your version of the problem, so I have voted yes.
@Ebcc1 you observe that the coin is heads if and only if it indeed came up heads in this iteration of the experiment. Why would those probabilities be different then?
What equals 1/2 is the ratio of iterations of the experiment in which the coin is Heads.
What equals 1/3 is the ratio of awakenings that happen on Heads.
The controversy is whether the formal mathematical concept of the probability that the coin is Heads, conditional on awakening in the experiment, should correspond to the first ratio or to the second.
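To make the two ratios concrete, here is a minimal Monte Carlo sketch (my own toy code, nothing more) that counts both at once:

```python
import random

def simulate(n_runs=100_000, seed=0):
    rng = random.Random(seed)
    heads_runs = 0        # iterations of the experiment where the coin lands Heads
    awakenings = 0        # total awakenings across all iterations
    heads_awakenings = 0  # awakenings that occur on a Heads iteration
    for _ in range(n_runs):
        if rng.random() < 0.5:   # Heads
            heads_runs += 1
            awakenings += 1      # one awakening
            heads_awakenings += 1
        else:                    # Tails
            awakenings += 2      # two awakenings
    return heads_runs / n_runs, heads_awakenings / awakenings

ratio_per_iteration, ratio_per_awakening = simulate()
print(ratio_per_iteration)   # ≈ 0.5  : the first ratio
print(ratio_per_awakening)   # ≈ 0.33 : the second ratio
```

Both numbers are uncontroversial; the dispute is about which of them deserves to be called the probability of Heads conditional on awakening.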
@a07c Usually the entire question is posed incorrectly because of that inconsistency: it is unclear whether what is being asked about is the rate of heads or the rate of observing heads. However, in this case the question is specifically about the degree of belief that the outcome of the coin flip was heads, conditioned on being awakened. This means that for this particular version of the question, because the ambiguity of asking simply for "the probability of heads" doesn't exist, the answer is just 1/3.
The reason I can be so confident about this is that if I ask you about your degree of belief about the value of a revealed coin flip, you won't tell me "it's fifty-fifty"; you'll tell me the value that you saw, because that is what I'm asking for.
Usually the entire question is posed incorrectly because of that inconsistency: it is unclear whether what is being asked about is the rate of heads or the rate of observing heads.
Do you mean that Halfers are talking about the unconditional P(Heads) = 1/2, while Thirders are talking about the conditional P(Heads|Awake) = 1/3? And that the whole disagreement lies in the ambiguity about which of these probabilities is being asked for?
If so, I respectfully disagree. I grant you, there are people who are simply confused by such ambiguity, but the absolute majority of those who have engaged with the problem to the point of actually being able to grasp the underlying math are not.
However, in this case the question is specifically about the degree of belief that the outcome of the coin flip was heads, conditioned on being awakened.
Yep. No ambiguity here. And yet I claim that P(Heads|Awake) = 1/2 while you claim that it's 1/3. It seems that our disagreement lies somewhere else. Let's try to figure out what exactly it is.
If I ask you about your degree of belief about the value of a revealed coin flip, you won't tell me "it's fifty-fifty"; you'll tell me the value that you saw, because that is what I'm asking for.
Good example. Suppose you tossed the coin and it happened to be Heads.
Before you showed me the coin I was 1/2 confident: P(Heads) = 1/2. This corresponds to the fact that in about half of the coin tosses that you make and do not show me, the coin is Heads, to the best of my knowledge. If we repeat this experiment multiple times, I'll guess Heads correctly in only about 50% of cases. The ratio of Heads among all iterations of the probability experiment is 1/2.
What happens when you show me that the coin is Heads? You provide me with evidence whose strength is proportional to how rare the event that I've just observed is. This event (you showing me Heads) happens in only half of the iterations of the probability experiment (tossing a coin and showing me the outcome): P(ShownHeads) = 1/2. It happens in every iteration where the coin is Heads, P(ShownHeads|Heads) = 1, and in no iteration where the coin is Tails, P(ShownHeads|Tails) = 0. Therefore I update twofold in favor of Heads.
Now I'm completely confident: P(Heads|ShownHeads) = 1. This corresponds to the fact that in 100% of the coin tosses that you make and where you show me that the outcome is Heads, the coin is indeed Heads, to the best of my knowledge. If we repeat this experiment (tossing the coin and showing me that it's Heads) multiple times, in every iteration the coin will be Heads. The ratio of Heads is 100% among all iterations of this experiment.
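Written out as a standard Bayes update, with ShownHeads standing for the event "you show me that the coin is Heads": P(Heads|ShownHeads) = P(ShownHeads|Heads) · P(Heads) / P(ShownHeads) = (1 · 1/2) / (1/2) = 1.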
Now, with that in mind, let's go back to Sleeping Beauty.
Before I awake in the experiment, P(Heads) = 1/2, for the same reasons as with any coin toss that I know nothing about.
What happens when I awake in the experiment? I observe that I'm awake. Is it evidence in favor of some outcome? This event (my awakening) happens in every iteration of the experiment where the coin is Tails: P(Awake|Tails) = 1. But likewise it happens in every iteration where the coin is Heads: P(Awake|Heads) = 1. This event simply happens in every iteration of the probability experiment, P(Awake) = 1, and therefore it doesn't allow me to distinguish between iterations where the coin is Heads and those where it is Tails: P(Heads|Awake) = P(Heads) = 1/2.
So even after awakening I keep my initial credence of 1/2. This corresponds to the fact that in about half of the iterations of the Sleeping Beauty probability experiment in which I awake, the coin is Heads. The ratio is 1/2 among all iterations of the probability experiment.
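The same Bayes bookkeeping applied to the awakening step, under this per-experiment way of modelling the setup: P(Heads|Awake) = P(Awake|Heads) · P(Heads) / P(Awake) = (1 · 1/2) / 1 = 1/2.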
Are you with me so far?
You can actually do this experiment in real life, although it's a bit convoluted:
Flip a coin at home. If it is heads, go out and explain the Sleeping Beauty Problem to one person, with that person being the "awakened" state on Monday. Then explain your experiment, how you flipped a coin to decide how many people to interview (of course, only tell them the fact that you flipped a coin, don't tell them how the coin landed), and ask them whether they think the coin landed heads or tails. If your coinflip is tails, do the same but with two different people, each person being an "awakened" state (don't tell them whether they are the first or second person). Again ask them whether they think the coin landed heads or tails.
Every single person you asked was given the exact same information as the sleeping beauty upon awakening: solely the premise of the experiment. Therefore, each person asked is equivalent to a Sleeping Beauty awakening.
Do this whole experiment, say, 50 times. Heads should come up about 25 times and tails about 25 times, for a total of roughly 25 people interviewed on a heads-flip and 50 people interviewed on a tails-flip. Naturally, for 1/3 of the people asked, "heads" would have been the correct answer; for the other 2/3, the correct answer would have been "tails".
Then, for the Sleeping Beauty, whose informational state is exactly the same as that of the people asked, what should be her credence, upon awakening, that the coin flipped heads? Obviously 1/3.
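If you don't want to bother fifty-odd strangers, here is a minimal simulation sketch of the same setup (my own toy code; one "interview" per person asked):

```python
import random

def street_interviews(n_flips=50, seed=0):
    rng = random.Random(seed)
    on_heads = 0   # people interviewed after a heads flip (one each)
    on_tails = 0   # people interviewed after a tails flip (two each)
    for _ in range(n_flips):
        if rng.random() < 0.5:
            on_heads += 1
        else:
            on_tails += 2
    return on_heads, on_tails, on_heads / (on_heads + on_tails)

print(street_interviews())            # roughly (25, 50, 1/3) for 50 flips
print(street_interviews(50_000)[2])   # converges to about 0.333
```

The last number is the fraction of people asked for whom "heads" was the correct answer.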
@luvkprovider I'd like to see what argument you have against this hyperintuitive explanation. I've done my absolute best to simultaneously keep it idiot-proof and idiot-accessible.
@Gameknight By providing a "hyperintuitive explanation" instead of a formal proof, you’ve already admitted that your argument is weaker than mine. Unless you think that arguments based on intuition are stronger than arguments based on formal logic and rigorous proof. If you need a "hyperintuitive explanation" for why your scenario is completely different from the Sleeping Beauty problem, here it is: If you are a person living on Earth, you are not guaranteed to be interviewed, but in the Sleeping Beauty problem, Beauty is guaranteed to wake up.
"By providing a "hyperintuitive explanation" instead of a formal proof, you’ve already admitted that your argument is weaker than mine."
Okay, so you still refuse to prove it. Just because an explanation is simpler doesn't mean it's wrong. You use formal proofs to mask your small but very important mistakes, leading to incorrect results that take a very long time to disprove. Just because something is intuitive, it doesn't mean that it's wrong. I have exactly 5 apples, and exactly 3 are red. Therefore intuitively I must have exactly 2 non-red apples, because I must have exactly 5 total apples. Is it wrong because it's an intuitive explanation? Of course not, you dolt.
Whether you are guaranteed to be interviewed or not is irrelevant - each person is asked the question during their interview, so anyone who is asked is guaranteed to be interviewed (because they are in the process of being interviewed). Everyone not interviewed is equivalent to the people who aren't the Sleeping Beauty - they are irrelevant.
"Very ironic, considering you thought that 0.9999… ≠ 1"
No, I didn't. That's both an ad hominem (it insults the person instead of confronting the argument) and a straw man (you are putting words in my mouth).
@Gameknight Yes, I agree that some non-formal explanations are right, but that doesn’t mean your explanation must be right. You are saying that your explanation is not wrong because there exist other explanations that are not wrong - clearly unreasonable. I only said that your argument is much weaker than mine because you provided an "intuitive" explanation while I provided formal proofs. This is clearly unarguable.
Of course when you are interviewed, you are guaranteed to be. But I obviously meant before the experiment starts. A person on Earth is not guaranteed to be chosen to be interviewed before the experiment starts, but Beauty knows she will wake up. Therefore, your explanation is wrong because it fails to capture this crucial aspect of the problem.
"You are saying that your explanation is not wrong because there exist other explanations that are not wrong - clearly unreasonable. I only said that your argument is much weaker than mine because you provided an "intuitive" explanation while I provided formal proofs. This is clearly unarguable."
Once again, you are shoving words in my mouth. The point I was making is that my proof is no weaker for being intuitive.
"A person on Earth is not guaranteed to be chosen to be interviewed before the experiment starts, but Beauty knows she will wake up. Therefore, your explanation is wrong because it fails to capture this crucial aspect of the problem."
Ah, but I have. The interviewed people being told the premise of the question is essentially Sleeping Beauty being told the premise before she gets put to sleep. Telling them that they are part of the experiment is essentially waking them up. They "wake up" with knowledge of the premise, which is equivalent to Sleeping Beauty being reminded of the premise when she wakes up.
More crucially, Beauty knowing that she will wake up is irrelevant because she does not remember anything upon waking up and is only told the premise of the experiment and the fact that she is participating in it. She is asked the question after she is woken up and told the premise of the experiment. Your insistence on the "she knows she will wake up" idea proves conclusively that either:
1. You have not read the question properly
2. You are stupid
3. You are being disingenuous
As I'm sure you are not #1 or #2, I can only assume it is case #3.
@Gameknight The difference is that when Beauty is told the premise of the experiment, she is told that she would have been woken up anyway, but when the people in the interview are told the premise, they are told they were lucky to be chosen because they were not guaranteed to be chosen. If this were not the case, say, the interviewers had a favorite person that they were going to interview no matter what (and the person knows this), then, from the favorite person’s point of view, the probability of heads would remain 1/2 after the interview.
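For what it's worth, here is a quick sketch of that favorite-person variant (my own toy code, assuming that on tails the favorite plus one extra person are interviewed, as in your original setup):

```python
import random

def favorite_vs_all(n_flips=100_000, seed=0):
    rng = random.Random(seed)
    fav_heads = fav_total = 0   # interviews of the favorite person only
    all_heads = all_total = 0   # interviews of anyone at all
    for _ in range(n_flips):
        heads = rng.random() < 0.5
        # The favorite is always interviewed; on tails one extra person is too.
        fav_total += 1
        all_total += 1 if heads else 2
        if heads:
            fav_heads += 1
            all_heads += 1
    return fav_heads / fav_total, all_heads / all_total

print(favorite_vs_all())   # ≈ (0.5, 0.333): 1/2 for the favorite, 1/3 across all interviews
```

From the favorite person's point of view the heads fraction stays at 1/2, even though across all interviews it is 1/3.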
@luvkprovider So now you're just ignoring the fact that Beauty forgets everything when she wakes up, so anything told to her previously is irrelevant. Thanks for reminding me that you are an ass who refuses to admit they're wrong.
@Gameknight Yes, Beauty forgets everything when she wakes up, but she is also told the premise of the experiment; otherwise how is she even supposed to have a probability? Did you even read my argument?
@luvkprovider So you agree that she forgets that she was told she would have been woken up anyway, and therefore whether she was told she would be woken up or whether the people are randomly selected is thoroughly irrelevant.
@Gameknight She is told after she wakes up. Otherwise, imagine if you just wake up in an experiment and someone just asks you for a probability without telling you what the experiment is.
@luvkprovider Yes, she is told the premise of her experiment. Just like the people interviewed. Therefore each person interviewed is equivalent to an awakening and therefore my experiment is accurate.
@Gameknight Yes. But you didn’t read my argument. I said that when Beauty is told the premise of the experiment, she is told that she would have been woken up anyway, but when the people in the interview are told the premise, they are told they were lucky to be chosen because they were not guaranteed to be chosen.
@luvkprovider When people in the interview are told the premise, they are also implicitly told that they would have been asked regardless of whether the coin flipped heads or tails - because they are being asked. If you want, you can think of the people being preselected to be asked in an ordered list rather than being randomly selected. I.e. the next person is guaranteed to be told the premise and asked the question, regardless of whether the coin flips heads or tails.
@luvkprovider Now you're just pulling semantics. You know full well the informational state of each person asked is identical to Beauty upon awakening.
Each person asked knows that someone would have been asked regardless of whether the coin flipped heads or tails - the only difference is who it was, which makes 0 difference. This is equivalent to Sleeping Beauty being told upon awakening that she would have been awakened either way.
@Gameknight What's wrong with pulling semantics if what you are saying doesn't make sense in the first place? Again, there's a difference between "are" and "would have". Just because you "are" doesn't mean you "would have"; otherwise everything would have probability either 0 or 1.
@Gameknight Yes, they know that someone would have been asked, but they don’t know that they would have been asked.
@luvkprovider You're using circular reasoning.
I say that it doesn't matter -> you say it matters
I say that there is essentially no difference, disproving that it matters, because who is asked is irrelevant to how they are analogous to a Beauty awakening -> you say that there is a difference, the difference in who is asked
I again repeat that the difference is irrelevant -> you petulantly cry "nuh-uh" and again repeat that there is a difference
@Gameknight Ok, but you didn’t defend your claims. The difference in who is asked should be very relevant, and if you claim it’s irrelevant, you need to explain why.
@luvkprovider It's irrelevant, because the person asked is given the same info regardless of who was asked. You claim that it's relevant, so you have the burden of proof.
@Gameknight Yes, they are given the same info, but it’s not true that they would have been given the same info. Stop relying on false arguments.
@luvkprovider "Yes, they are given the same info, but it’s not true that they would have been given the same info." Okay, now you are just throwing up word salad
@Gameknight And you don't take the time to understand what it means. The difference between "are" and "would have" is very important. For example, in the modified Monty Hall problem where the host opens a door at random and it just so happens not to have the car in it, it doesn't matter whether you switch, because you have the same 50% probability either way. But if the host knows where the car is and would never have opened the door with the car in it, then you should switch, because then you have a 2/3 chance of getting the car. But people like you would argue that these two situations are the same because the host didn't open the door with the car anyway.
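If it helps, here is a quick simulation sketch of those two host behaviors (my own toy code), keeping only the games in which the opened door turns out not to hide the car:

```python
import random

def switch_win_rate(host_knows, n_trials=200_000, seed=0):
    # Fraction of games won by switching, among games where the opened door shows a goat.
    rng = random.Random(seed)
    wins = games = 0
    for _ in range(n_trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        others = [d for d in range(3) if d != pick]
        if host_knows:
            # Informed host: always opens one of the other doors that hides a goat.
            opened = rng.choice([d for d in others if d != car])
        else:
            # Uninformed host: opens one of the other doors at random.
            opened = rng.choice(others)
            if opened == car:
                continue   # discard runs where the car is accidentally revealed
        games += 1
        switched_to = next(d for d in range(3) if d not in (pick, opened))
        wins += (switched_to == car)
    return wins / games

print(switch_win_rate(host_knows=False))  # ≈ 0.5 : switching doesn't help
print(switch_win_rate(host_knows=True))   # ≈ 2/3 : switching helps
```

Both versions end with the same visible situation (one goat door open), yet the probabilities differ because of what the host would have done.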
@luvkprovider This is a bad analogy. The difference between the Sleeping Beauty Problem and my experiment is not whether the door is opened or not - it's whether he opens it before or after he tells you he's about to open it. Either way he will open the door, and either way you should switch to the other unopened door.
@Gameknight I didn’t talk about whether the door is opened or not. In the first situation where the host opens a door randomly and you find it doesn’t have the car in it, switching and not switching will both have a 1/2 probability of being correct. Also, I’m not sure what you mean by "tells you he’s about to open it". Do you mean "tells you the details of the experiment"?
@luvkprovider Yes, you did talk about whether the door was opened or not:
"But people like you would argue that these two situations are the same because the host didn’t open the door with the car anyway."
My point was that the difference between the Sleeping Beauty Problem and the experiment I suggested is equivalent to the difference between the host in the Monty Hall problem telling you that he's opening the door before/after he opens it. As in, both differences do not change the correct answer to either problem.
Since you have proven to be purely disingenuous at every turn, I suppose it's high time I stopped arguing with an idiot trying to pull me down to his level.
@Gameknight Yes, it is irrelevant whether the host tells you he’s opening the door before/after he opens it. In fact, it is irrelevant whether the host tells you he’s opening the door at all. But that doesn’t prove anything about the relation between the Sleeping Beauty problem and your experiment.
Also, I did not talk about whether the door was opened or not. I talked about whether the host picked a door randomly or knew which door the car was in. Read my comments.