Best explanation of how some probabilities can be more correct than others
Ṁ200 bounty left

Human probability judgements can obviously be bad in the sense of being biased, and not matching the conclusions drawn by an ideal Bayesian reasoner given the same information. This is not what I'm asking about.

Alice and Bob are both ideal Bayesian reasoners considering whether the new iPhone will sell at least 100 million units. Alice knows nothing about the event, so her estimate is 50%. Bob works for Apple and knows that the new iPhone kind of sucks, so his estimate is 20%. Bob has strictly more information, and if both of them bet at their credences, Bob will make money from Alice in expectation.

Now Alice learns all the information that Bob had about the iPhone being a poor product, but also learns that, contrary to what had previously been announced, there's not going to be an Android release that year. Taking all this new information into account, her credence remains 50%. Now Alice has strictly more information than Bob, and Alice is the one who will make money in expectation.

We can imagine a more complex situation where neither of them has strictly more information than the other, and presumably the one who has more total bits of information about the event is the one who will profit in expectation.
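The two betting stages above can be checked with a toy Monte Carlo sketch. The probability table here is my own construction, chosen so the credences match the story's numbers (in particular, I assume the announced Android release happens with probability 0.9, which makes P(event) = 0.5, P(event | bad iPhone) = 0.2, and P(event | bad iPhone, no Android) = 0.5 all hold simultaneously). The bettors trade a contract paying 1 if the event happens, priced at the midpoint of their credences:

```python
import random

def bet_profit(p_mine, p_theirs, outcome):
    """My profit betting against someone at the midpoint of our credences.
    Whoever believes more strongly buys a contract paying 1 if the event
    happens; the other side sells it at the midpoint price."""
    if p_mine == p_theirs:
        return 0.0
    mid = (p_mine + p_theirs) / 2
    return (outcome - mid) if p_mine > p_theirs else (mid - outcome)

def run(n=200_000, seed=0):
    rng = random.Random(seed)
    bob_total = alice2_total = 0.0
    for _ in range(n):
        good = rng.random() < 0.5      # is the new iPhone actually good?
        android = rng.random() < 0.9   # does the announced Android release happen?
        # Conditional probability of "sells 100M units", chosen so that
        # P(event) = 0.5, P(event | bad) = 0.2, P(event | bad, no Android) = 0.5
        p_true = 0.8 if good else (1 / 6 if android else 0.5)
        outcome = 1 if rng.random() < p_true else 0

        p_alice1 = 0.5                 # stage 1: Alice knows nothing
        p_bob = 0.8 if good else 0.2   # Bob knows the quality bit only
        p_alice2 = p_true              # stage 2: Alice knows both bits

        bob_total += bet_profit(p_bob, p_alice1, outcome)
        alice2_total += bet_profit(p_alice2, p_bob, outcome)
    return bob_total / n, alice2_total / n

if __name__ == "__main__":
    bob_edge, alice_edge = run()
    print(f"Bob vs uninformed Alice:     {bob_edge:+.3f} per bet")
    print(f"Fully-informed Alice vs Bob: {alice_edge:+.3f} per bet")
```

Both averages come out positive for the better-informed side: knowing one extra bit beats knowing none, and knowing both bits beats knowing one, even though fully-informed Alice's credence is sometimes the same 50% she started with.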

This means that when discussing a subjective probability, the probability itself is not the only meaningful number; it also matters how many bits of information were used to derive it. This feels odd.
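One standard way to make that second number precise is the *resilience* of a credence: how much it moves under new evidence. A sketch, under the assumption that each agent's uncertainty about the underlying frequency is modeled as a Beta distribution (a common but by no means obligatory choice): two agents can both report 0.5, yet the one whose 0.5 rests on more observations barely budges when new data arrives.

```python
from fractions import Fraction

def posterior_mean(heads, tails):
    """Posterior mean credence after observing `heads` successes and
    `tails` failures, starting from a uniform Beta(1, 1) prior."""
    return Fraction(1 + heads, 2 + heads + tails)

# An agent with no data and an agent with 50 successes + 50 failures
# both hold credence 1/2. Now each observes one new success:
naive = posterior_mean(1, 0)        # Fraction(2, 3)  — moves a lot
informed = posterior_mean(51, 50)   # Fraction(52, 103) ≈ 0.505 — barely moves
```

On this picture the "extra number" attached to a probability is just the effective sample size behind it, which determines how resilient the credence is rather than what it currently says.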

I have no specific question, this is open-ended. Am I missing anything? Do you have an alternative way of thinking about this that might make more intuitive sense?
