I.e. one that is calibrated so that, if you pay your expected winnings every time and we perform this experiment lots of times, your average winnings will be zero (assuming I’m using the same source of unfair coins each time).
This may be tangential, but how do you run this experiment lots of times? If you just abort the runs where you get any tails among the initial 100 throws, then I think seeing 100 heads in a row doesn’t mean anything much.
I see the point you’re making about observation selection effects but surely in this case it doesn’t flatten the posterior very much. Of all the times you see a coin come up heads 100 times in a row, most of them will be for coins with p(heads) close to 1, even if you are discarding all other runs. That’s assuming you select coins independently for each run.
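A quick simulation supports this. This is a minimal sketch, not anything specified in the thread: it assumes each coin’s bias p is drawn uniformly from [0,1] (the thread never states a prior), runs each coin for 100 flips, and discards every run that isn’t all heads — exactly the selection procedure being discussed. Among the surviving runs, the biases cluster near 1:

```python
import random

random.seed(0)

kept = []  # biases of the coins that happened to show 100 heads in a row
while len(kept) < 1000:
    p = random.random()  # assumed prior: bias drawn uniformly from [0, 1]
    # flip 100 times; all() short-circuits at the first tail, so
    # rejected runs are cheap
    if all(random.random() < p for _ in range(100)):
        kept.append(p)

mean_p = sum(kept) / len(kept)
print(mean_p)  # typically around 0.99 (posterior mean is 101/102 under this prior)
```

So even with the discarded runs, conditioning on "100 heads observed" concentrates the surviving biases near p(heads) = 1, which is the point about the posterior not being flattened much.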
Hmm, perhaps I misread the post. I was assuming he was picking a single coin and flipping it 100 times.
The description of the coin flips as having a Binomial(n=?,p) distribution, instead of a Bernoulli(p) distribution, might be what caused the misreading.
Perhaps. Obviously each coin is flipped just once, i.e. Binomial(n=1, p), which is the same thing as Bernoulli(p). I was trying to point out that for any other n it would behave just like a normal coin, if someone were to keep flipping it.
You take the evidence, and you decide that you pay X. Then we run it lots of times. You pay X, I pick a random coin and flip it. I pay your winnings. You pay X again, I pick again, etc. X is fixed.
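This repeated-bet protocol can be sketched in a few lines. The sketch assumes details the thread leaves open: biases drawn uniformly from [0,1], a bet that pays $1 if the post-evidence flip is heads, and a fixed price X set to the posterior mean under that prior (101/102 after 100 heads). If X is calibrated, average winnings per round come out near zero:

```python
import random

random.seed(1)

X = 101 / 102  # fixed price: posterior mean of p, assuming a uniform prior
net = 0.0
rounds = 0
while rounds < 2000:
    p = random.random()  # a fresh random coin each round (assumed prior)
    # only rounds where the coin shows 100 heads in a row count as
    # "you saw the evidence"; all other runs are discarded
    if not all(random.random() < p for _ in range(100)):
        continue
    payout = 1.0 if random.random() < p else 0.0  # the flip you bet on
    net += payout - X  # you pay X, I pay your winnings
    rounds += 1

print(net / rounds)  # average winnings per round, near zero if X is calibrated
```

Any other fixed X would make the average drift away from zero over many rounds, which is what "calibrated" means here.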
Your memory could be wiped of just the throw that you were asked to bet on, so that the 100 throws of heads do not have to be repeated. Equivalently, you could place all bets before any of them are evaluated.