The Doctor
You, a perfectly rational agent, awake in a cell. You take stock of your surroundings and find them bare, except for a small pamphlet next to you entitled "Food for Thought". You open it and begin to read. It informs you that you have come to be as part of an insidious experiment run by "The Doctor", a rogue logician. The Doctor, whose motives you find impenetrable, will create groups of people in turn, starting with a single person and doubling the size each time. Each group's fate, once created, will be determined by a complex mechanism tucked away deep in a mountain, whose operation is unfathomable to you. The pamphlet assures you, however, that the process is truly fair and random: no group will be favored over any other. The mechanism will output either LIFE or DOOM, and the experiment ends once DOOM is output.

Your fate, should the mechanism decide DOOM, is to be devoured by the Beast, whose description does not bear repeating. The Doctor reasons that since the group which receives the judgement of DOOM is always larger than all the members of the spared groups combined, the Beast will not become enraged by a larger, possibly more delicious, cohort remaining un-devoured.

You look around your cell; neither you nor it has any distinguishing features whatsoever. You consider that you are a perfectly rational agent, and try to decide exactly how worried you should be about your possible beastly consumption. After you fix your answer in your mind, a booming disembodied voice tells you that the mechanism has judged twelve trillion, four hundred and sixty-three billion, nine hundred and seven million, three hundred and four thousand, and five times before your upcoming judgement. Does your fear diminish?
End of Part One
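A quick check of the Doctor's arithmetic, for anyone who wants it spelled out rather than taken on faith: because the group sizes double from one, the doomed group always outnumbers all the spared groups combined, by exactly one person.

```python
# Group sizes run 1, 2, 4, ...: at round n the doomed group has 2**(n - 1)
# members, while all the spared groups together total 2**(n - 1) - 1.
for n in range(1, 11):
    doomed = 2 ** (n - 1)
    spared = sum(2 ** k for k in range(n - 1))
    assert doomed == spared + 1
```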
Part Two
@Primer I can add whatever qualifiers you want, but I believe all relevant information is delivered from a God's-eye view. If the voice changes your answer, I'm happy to specify that it was not lying, if that helps.
@Sailfish I can't get my head around the fact that a rational agent would think he was being pranked, or something similar. The storytelling clashes with hypothetical-thinking mode.
@Primer I actually don't think it matters. If you consider two rational agents with different priors, they still cannot just disagree on those priors: each should believe that his or her prior better reflects the actual state of the universe in some way. If I were asking you to bet, consider which agent, with which priors, would do better.
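If the betting framing helps, here is a minimal simulation sketch. It assumes the mechanism judges each group DOOM independently with some fixed probability p, which is only my gloss on "truly fair and random", and the value of p is made up. It tallies what fraction of all people ever created end up devoured, which is one of the two quantities the puzzle pits against the per-group chance p.

```python
import random

def run_once(p, rng):
    """One run of the experiment: groups of 1, 2, 4, ... are created in turn
    and each is judged DOOM independently with probability p (an assumed
    reading of 'fair and random'). Returns (doomed, created): the doomed
    group's size and the total number of people created."""
    size, created = 1, 0
    while True:
        created += size
        if rng.random() < p:
            return size, created
        size *= 2

def pooled_doom_fraction(p, trials=100_000, seed=0):
    """Fraction of all people created, across many runs, who are devoured."""
    rng = random.Random(seed)
    doomed_total = created_total = 0
    for _ in range(trials):
        doomed, created = run_once(p, rng)
        doomed_total += doomed
        created_total += created
    return doomed_total / created_total

# Each individual group's chance of DOOM is just p, yet in every completed
# run the doomed group outnumbers all the spared people combined, so the
# pooled fraction always lands above one half.
# Caveat: for p <= 1/2 the group sizes have infinite expectation, so the
# estimate is dominated by rare, very long runs and is quite unstable.
print(pooled_doom_fraction(p=0.5))
```

Whether the pooled fraction or the per-group chance is the right number to fear is, of course, the whole question.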
If you want to skip ahead and engage with the substance, I've already posted the answer here (I was somewhat impatient)
https://manifold.markets/dreev/is-the-probability-of-dying-in-anth#7tkxIkgI3Dgb4E6fcy0k
I promise it is sufficiently dry and lacking in artistic flair.
@MartinRandall Interesting. Is there a reason you can't assign a conditional probability? For example, even if you are explicitly told you are being simulated, surely you can reason about how likely certain events in the simulation are. I somewhat explicitly left a jailbreak, so I guess I am curious about what your argument for "Other" would be and how you would formalize it.
@Sailfish Well, for example, if I assign a conditional probability, does that make it more likely that The Doctor places me in the situation? If so, it's better to refuse to reason unless the hypothetical is a net-positive experience.
@MartinRandall I would not have guessed "Pre-commit to refusing to act as an agent so you are less likely to be simulated" as a possible answer; I guess I'll add it, however you would like to phrase it. I might tender that it is net positive in the case where your chance of being eaten is very nearly zero, and that reasoning about the simulation requires you to evaluate it in some way. For example, if you were to be delivered, say, a cupcake if you survived, you would need to evaluate the probability of survival to determine whether it was net positive or not.
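To make that cupcake point concrete, a tiny sketch with made-up utilities: deciding whether the hypothetical is a net-positive experience already requires an estimate of the survival probability, which is the very thing the thought experiment asks you to reason about.

```python
def hypothetical_is_net_positive(p_survive, u_cupcake=1.0, u_devoured=-1_000.0):
    """Illustrative only: the utilities are invented. The point is that the
    answer depends on p_survive, so evaluating the hypothetical cannot be
    separated from estimating the probability of survival."""
    return p_survive * u_cupcake + (1.0 - p_survive) * u_devoured > 0.0
```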