A scenario which occurred to me and which I found strange at first glance: consider a fair coin and two people: Alice, who is 99.9% sure the coin is fair and who can update on evidence like a fine Bayesian, and Bob, who says he's perfectly sure the coin is biased toward heads and does not update on the evidence at all.
Nonetheless, the perfectly correct Alice (who effectively needs to choose randomly and might as well always say 'heads') and the perfectly incorrect Bob (who always says 'heads' because he's always certain that will be the correct answer) have the same 50% chance of correctly predicting the next coin toss. Even when the experiment is repeated many times, with each repetition further confirming to Alice that she is right to believe the coin fair, Alice's predictive ability never improves over non-updating Bob's on a toss-by-toss basis.
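This toss-by-toss equivalence is easy to check numerically. A minimal sketch in Python (the function name and the always-guess-heads policy are my own assumptions, not anything specified above): both guessers call 'heads' every time, so their per-toss accuracies are identical and converge to 50%.

```python
import random

def simulate(n_tosses=100_000, seed=0):
    rng = random.Random(seed)
    alice_correct = 0  # Alice believes the coin is fair; with no edge
                       # either way, she might as well always guess heads.
    bob_correct = 0    # Bob is certain of heads, so he also always guesses heads.
    for _ in range(n_tosses):
        toss = rng.choice("HT")  # fair coin
        if toss == "H":
            alice_correct += 1
            bob_correct += 1
    return alice_correct / n_tosses, bob_correct / n_tosses
```

Since both policies amount to literally the same guess, the two accuracies are equal on every run; the point is that no guessing policy does better than 50% here.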
I found that initially perplexing. If we consider accuracy alone, Alice's more accurate beliefs can only show themselves if she's allowed to make predictions about larger patterns (e.g. she'd expect roughly equal numbers of heads and tails). If she isn't given that ability, and a third party is told only the number of times each participant guessed correctly, that third party couldn't tell who is who.
One more thing that distinguishes them: if Alice and Bob were allowed to bet on their guesses, Alice would accept only favorable odds, while Bob would soon go bankrupt...
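A rough sketch of that betting difference, under odds of my own choosing (nothing here fixes the actual terms): suppose the book pays 0.9 units on a winning 1-unit heads bet, which has negative expectation for anyone who thinks the coin is fair.

```python
import random

def run_bets(bankroll=100.0, stake=1.0, payout=0.9, n_rounds=5000, seed=1):
    # Each round the book offers: stake 1 unit on heads, win `payout` if heads.
    # Alice (p = 0.5) computes EV = 0.5*payout - 0.5*stake < 0 and declines,
    # so her bankroll never moves. Bob, certain of heads, sees EV = payout > 0
    # and accepts every round until he is broke.
    rng = random.Random(seed)
    alice, bob = bankroll, bankroll
    for _ in range(n_rounds):
        if bob <= 0:
            break
        heads = rng.random() < 0.5  # fair coin
        bob += payout if heads else -stake
    return alice, max(bob, 0.0)
```

Bob wins about half his bets, but each loss outweighs each win, so his bankroll drifts steadily toward zero while Alice's stays put.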
Doesn’t seem very strange to me. For any (realistic) situation, there are any number of irrelevant false beliefs that you could have while still managing to predict the result correctly. Or even relevant false beliefs that nonetheless produced the right prediction: e.g. a tribe that believed in spirits might believe that sexual intercourse attracted a disembodied spirit into a woman’s body and caused it to grow a new body for itself, which would be false but still lead to the correct prediction of (intercourse → pregnancy).
The case of a fair coin seems particularly bad for Alice, being as it were maximally entropic.
The difference between them becomes apparent once they start betting on other things, like the number of tails in a series of 10 coinflips. The question is: what is special about betting on heads vs. tails of a fair coin that doesn’t allow Alice to do any better than Bob?
A fair coin is maximally entropic. There is no skill that will let you do anything with sheer chaos.
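One way to make this precise (my framing, not the commenter's): for a coin with bias p toward heads, the best achievable single-toss accuracy is max(p, 1 − p), and p = 0.5 is exactly where that advantage bottoms out at 50%.

```python
def best_accuracy(p):
    """Best single-toss accuracy available to someone who knows the bias p
    toward heads: always guess the more likely side."""
    return max(p, 1 - p)

# Knowledge of the bias pays off whenever p != 0.5; at p = 0.5 every
# guessing policy, informed or not, achieves exactly 50%.
```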
I think it is better to say that the bet on offer is fair. Fairness is not a property of the coin alone, but of the bet as well. We don't notice that there is a choice of bet because even odds correspond to the maximum-entropy case, but for any weighted coin there is a corresponding fair bet.
Fair bets do have lots of special properties, but we would have the same situation if a correct call of tails paid 1 and a correct call of heads paid 2: Alice and Bob would both always bet heads (except in the roughly 1-in-1000 case where the sequence starts with 10 tails and Alice updates in the wrong direction; but the asymptotics are the same).
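The arithmetic behind that claim, as I read it: under Alice's fair-coin belief the two expected payoffs are

```python
p_heads = 0.5                 # Alice's belief: the coin is fair
ev_heads = p_heads * 2        # a correct call of heads pays 2
ev_tails = (1 - p_heads) * 1  # a correct call of tails pays 1
# ev_heads (1.0) exceeds ev_tails (0.5), so Alice picks heads every
# time, just as Bob does out of sheer certainty.
```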
I think you’re assuming that Alice has to pick H or T randomly and then ask the third party if it’s correct. But she doesn’t have to do that. She can just ask the third party whether it’s H, each time. Over time it will be confirmed that the coin is fair.
Yes, but my point was that her knowledge that the coin is fair doesn't help her improve her guesswork on the next toss over Bob's, and someone judging her on the basis of her toss-by-toss successes couldn't ascertain that she has more accurate beliefs than Bob…