
Anthropic reasoning leads to conclusions that seem hard for common sense to accept, but also hard to extricate from normally accepted statistics and probability.
The author of the blog Rising Entropy describes a famous thought experiment that relies on apparently well-founded assumptions:
Suspend your disbelief for a moment and imagine that there were at some point just two humans on the face of the Earth – Adam and Eve. This fateful couple gave rise to all of human history, and we are all their descendants. Now, imagine yourself from their perspective.
From this perspective, there are two possible futures that might unfold. In one of them, the two original humans procreate and start the chain of actions leading to the rest of human history. In another, the two original humans refuse to procreate, thus preventing human history from happening.
For the sake of this thought experiment, let’s imagine that Adam and Eve know that these are the only two possibilities (that is, suppose that there’s no scenario in which they procreate and have kids, but then those kids die off or somehow else prevent the occurrence of history as we know it).
By the above reasoning, Adam and Eve should expect that the second of these is enormously more likely than the first. After all, if they never procreate and eventually just die off, then their birth orders are 1 and 2 out of a grand total of 2. If they do procreate, though, then their birth orders are 1 and 2 out of at least 100 billion. This is 50 billion times less likely than the alternative!
Now, the unusual bit of this comes from the fact that it seems like Adam and Eve have control over whether or not they procreate. For the sake of the thought experiment, imagine that they are both fertile, and they can take actions that will certainly result in pregnancy. Also assume that if they don’t procreate, Eve won’t get accidentally pregnant by some unusual means.
This control over their procreation, coupled with the improbability of their procreation, allows them to wield apparently magical powers. For instance, Adam is feeling hungry and needs to go out and hunt. He makes a firm commitment with Eve: “I shall wait for an hour for a healthy young deer to die in front of our cave entrance. If no such deer dies, then we will procreate and have children, leading to the rest of human history. If such a deer does die, then we will not procreate, and guarantee that we don’t have kids for the rest of our lives.”
Now, there’s some low prior on a healthy young deer just dying right in front of them. Let’s say it’s something like 1 in a billion. Thus our prior odds are 1:1,000,000,000 against Adam and Eve getting their easy meal. But now when we take into account the anthropic update, it becomes 100 billion times more likely that the deer does die, because this outcome has been tied to the nonexistence of the rest of human history. The likelihood ratio here is 100,000,000,000:1. So our posterior odds will be 100:1 in favor of the deer falling dead, just as the two anthropic reasoners desire! This is a 99% chance of a free meal!
This is super weird. It sure looks like Adam is able to exercise telekinetic powers to make deer drop dead in front of him at will. Clearly something has gone horribly wrong here! But the argument appears to be totally sound, conditional on the acceptance of the principles we started off with. All that is required is that we allow ourselves to update on evidence of the form “I am the Nth human being to have been born.” (as well as the very unusual setup of the thought experiment).
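To make the quoted arithmetic concrete, here is a minimal sanity check of the numbers, under the assumption that the anthropic update is treated as an ordinary Bayesian likelihood-ratio update (the numbers are the blog's; the Python framing and variable names are mine):

```python
from fractions import Fraction

# Sanity check of the odds arithmetic in the quoted argument, treating the
# anthropic update as an ordinary Bayesian likelihood-ratio update.

# "50 billion times less likely": being birth number 1 (or 2) out of ~100 billion
# people versus out of 2, under a uniform self-sampling assumption.
print(Fraction(1, 2) / Fraction(1, 100_000_000_000))  # 50000000000 -> 50 billion

# Deer commitment: prior odds 1:1,000,000,000 that the deer drops dead,
# multiplied by the 100,000,000,000:1 anthropic likelihood ratio.
prior_odds = Fraction(1, 1_000_000_000)
posterior_odds = prior_odds * 100_000_000_000
posterior_probability = posterior_odds / (1 + posterior_odds)

print(posterior_odds)                # 100 -> posterior odds of 100:1 in favor of the deer dying
print(float(posterior_probability))  # ~0.990 -> the quoted "99% chance of a free meal"
```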
The author then proceeds to describe a parallel scenario that could actually occur in the future, if humanity is able to establish a World Government with power over whether humans colonize the rest of the galaxy or not.
This question asks whether this type of scenario would empirically produce the instrumentally intended results, such as a deer dropping dead in front of anthropic reasoners acting like Adam and Eve.
And God blessed them, saying, "Be fruitful, and multiply!" But Adam and Eve would not be tricked.
"Look, let's suppose that if it were possible to procreate there might later be 100 billion people in my reference class," explained Adam. "More, honestly, if we treat birds and fish as in our reference class. Probably hundreds of trillions, though I hear nobody really knows how many fish there are."
God: "Um-"
Adam: "The chance that I would be literally the first person alive is tiny. Whereas if procreation isn't possible, it's like, 1/2. If I suppose some reasonable prior on being infertile (or Eve being infertile, but I don't want to be rude)-"
Eve: "Such a feminist. 🙄"
Adam: "Anyway, the self-sampling assumption clearly shows that we can know that reproduction isn't possible. That's why it didn't work earlier when I tried to kill that deer over there with my mind."
God: "You what?"
Eve, turning to God: "Can you, like, take one of my ribs and use it to make me a new husband?"
Adam: "I doubt it."
I don't think you can hunt deer purely with firm commitments.
Imo it's not less likely to have the birth order they have if there are more people in total. Probability is in the mind; it's the quantification of uncertainty. They know their birth order, so there is no uncertainty there. It seems like the argument implicitly assumes a uniform distribution over possible birth orders, which is not justified by the setup imo.
Note that I'm not an indexical-uncertainty hater in general. For example, I'm sure that 1/3 is the correct answer to Sleeping Beauty, but in that case there is actually uncertainty over an indexical statement.
(Disclaimer: I read Nick Bostrom's Anthropic Bias quite a long while ago and basically only remember a few key things (including CRAP = Completely Ridiculous Anthropic Principle), so the above might be mistaken.)
@koadma Thanks for sharing your perspective.
They know their birth order, there is no uncertainty there.
Isn't what comes after their birth order the uncertain factor here? You can know your lottery number with perfect certainty, but that doesn't in itself tell you whether you'll win or not.
@singer Hmm, I'm not sure I follow your analogy. In the case of a lottery, there are two relevant numbers: yours and the winning one. Their matching is uncertain. What's the analogous winning number for Adam and Eve (I think I get that the lottery size is the population size and your number is the birth order)?
I'm curious what you'd answer for my analogous problem (or what disanalogies you'd point at):
Traditional Sleeping Beauty, with one difference: when Beauty is awoken the first time, they are told that this is their first awakening. What should Beauty's credence at that time be that the coin landed heads?
To spell out the analogy:
Awakenings = people in history
Coin lands heads (1 awakening) = Adam and Eve do not reproduce
Awakening order = Birth order
Should Beauty reason the following way after being awoken on the first day and being told that this is their first awakening: "Beauty should expect that the coin landing heads is twice as likely as it landing tails. After all, if it landed heads, then their awakening order is 1 out of a grand total of 1. If it landed tails, though, then their awakening order is 1 out of 2. This is 2 times less likely than the alternative!"
The difference I see is that in the deer example the future uncertainty depends on a commitment that forces events to go one way or the other, whereas in the Sleeping Beauty example the uncertainty is about a past event that can't be affected. But I am a "thirder", and maybe you can help spot the inconsistency in my reasoning about the deer problem.
The "real" SSA answer would be that 2nd awakening Beauty belongs to a different reference class than either of the 1st awakening Beauties, right? But that isn't how I was intuitively thinking of it.
It seems to me like rejecting anthropic deer hunting is going to lead to weirder conclusions in the long run, and that it's easier to just admit that in some weird edge cases of probability, you get weird outcomes. Weird goes in, weird comes out. Like with MWI, it's just a cost of pushing science beyond the situations our brains evolved to deal with. Better to trust the math.
@singer Does that mean you think you can hunt deer with anthropic reasoning? I don't see any votes for yes.
@MaxHarms I usually don't vote on my own polls [edit: otherwise I would vote Yes]. The deer example doesn't actually work in real life, but deer-like examples like the World Government one seem valid. The alternative seems to be rejecting the Copernican principle (in some form).
@singer I’m also fond of https://joecarlsmith.com/2021/09/30/sia-ssa-part-1-learning-from-the-fact-that-you-exist but I think the ACX post is a more accessible response to the notion that Copernicanism should be extended in the way you suggest.
(to be clear, I think Copernicanism shouldn't be extended. I'm very much in agreement with the ACX post)
You caught me red-handed conflating SSA with anthropics in my mind, and when I made this question I had forgotten the difference between SIA and SSA. I really should have just said "indexical uncertainty with unusual implications" in my first comment instead of "anthropic deer hunting". This question's example was just meant as a stand-in for those types of situations, and I wasn't thinking about them in a very specific way.
I do remember reading Joe Carlsmith's series and admiring the endorsement of the philosopher in the Presumptuous Philosopher thought experiment. It's time I reread it.