Here’s how that goes. I flip 3 coins. Say I get 2 heads. My probability estimate for “there are 4+ heads total” is now 4⁄8 (the probability that 2 or 3 of your coins are heads). For the full set of outcomes I can have, the options are: (0H, 0⁄8) (1H, 1⁄8) (2H, 4⁄8) (3H, 7⁄8). You perform the same reasoning. Then we each share our probability estimates with the other. Say that on the first round, we each share estimates of 50%. Then we can each deduce that the other saw exactly two heads, and on the second round (and forever after) both our estimates become 100%. For all possible outcomes, my first round probability tells you exactly how many heads I flipped, and vice versa; as soon as we share probabilities once, we both know the answer and agree.
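For what it's worth, the first-round table above can be checked in a few lines of Python (a quick sketch; the function name and structure are mine, not anything from the original setup):

```python
from fractions import Fraction
from math import comb

def estimate(k_heads_seen, n_other=3, threshold=4):
    """My posterior for "4+ heads total" after seeing k of my 3 coins.

    I need the other player to show at least (threshold - k) heads
    among their n_other coins.
    """
    need = max(threshold - k_heads_seen, 0)
    favorable = sum(comb(n_other, j) for j in range(need, n_other + 1))
    return Fraction(favorable, 2 ** n_other)

# The four possible first-round announcements: 0, 1/8, 4/8, 7/8.
announcements = {k: estimate(k) for k in range(4)}

# They are all distinct, so hearing your announcement tells me
# exactly how many heads you flipped; agreement follows immediately.
assert len(set(announcements.values())) == 4
```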
(Also, you’re not using “confidence interval” correctly. A confidence interval is a frequentist construct for estimating a parameter, not a posterior probability.)
I still don’t see any version of this that’s simpler than Finney’s that actually makes use of multiple rounds, and when I fix the math on Finney’s version it’s decidedly not simple.
My version of making this work would be to share only limited information.
E.g. sharing only “at least 33% heads”, or “>10% heads and >80% tails”: bounds that don’t sum to 100%, leaving an “unknown space” in the middle that is harder to work out. The point is limiting the shared predictions to partial information. Playing with multiple people should also make things more complicated, as would letting the number of coin flips vary (chosen by the person flipping, and unknown to the others within set parameters).
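A toy version of that coarsening idea, just to illustrate the “unknown space” (the bucket names and thresholds here are my own invention, not part of the proposal):

```python
from fractions import Fraction
from math import comb

def estimate(k, n_other=3, threshold=4):
    """Posterior for "4+ heads total" after seeing k of my 3 coins."""
    need = max(threshold - k, 0)
    favorable = sum(comb(n_other, j) for j in range(need, n_other + 1))
    return Fraction(favorable, 2 ** n_other)

def coarsen(p):
    """Announce only a coarse bucket, not the exact posterior."""
    if p < Fraction(1, 4):
        return "low"
    if p < Fraction(3, 4):
        return "mid"
    return "high"

# Exact announcements (0, 1/8, 4/8, 7/8) are all distinct, so they
# leak the exact head count. Coarse buckets merge the 0H and 1H
# cases, leaving genuine uncertainty after the first round.
print([coarsen(estimate(k)) for k in range(4)])  # ['low', 'low', 'mid', 'high']
```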