Eliezer’s example on Bayesian statistics is wr… oops!

This post was going to be an explanation of how an example Eliezer Yudkowsky frequently uses in discussions of statistics doesn’t actually imply what he thinks it does. In the process of trying to prove his mistake, I found out that I was the one who was actually wrong. I’m still writing the post up, since it might end up being interesting to someone else who has made the same mistake I did.

The example in question is one that Eliezer often uses when arguing that Bayesian statistics are superior to frequentist ones. Here in the Sequences, here on Arbital, here in Glowfic, and a couple of other times I can’t find at the moment. A coin is flipped six times; it comes up heads the first five times, and tails the sixth. But we don’t know whether the experimenter had A) decided beforehand to flip the coin six times and report what happened, or B) decided beforehand to flip the coin over and over until it came up tails and report how long that took. What information does this give us about the coin’s bias, or lack thereof?

According to a frequentist perspective, there’s a serious difference in the p-values one would assign in these two different scenarios. In the first case, the result HHHHHT is placed into the class of results that are at least that extreme relative to the “null hypothesis”—of which there are 14: HHHHHH, TTTTTT, HHHHHT, TTTTTH, HHHHTH, TTTTHT, and so on. With 64 possibilities in total, 14/64 ≈ 0.22, which is far above the p=0.05 level and therefore not enough to conclude significance. In the second case, the result HHHHHT is instead placed into a different class of results: 5 heads and then a tail, 6 heads and then a tail, 7 heads and then a tail, and so on forever. The probability of that entire class is 1/32 in total, which is about 0.03: statistically significant!
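
(If you want to check those two numbers yourself, here is a quick sketch in Python; this is just my own sanity check, not anything from the linked posts.)

```python
from fractions import Fraction
from itertools import product

# Design A: six flips decided in advance. The rejection class is "five or
# more flips showing the same face", which covers 14 of the 64 sequences.
extreme = sum(
    1
    for seq in product("HT", repeat=6)
    if seq.count("H") >= 5 or seq.count("T") >= 5
)
p_fixed_n = Fraction(extreme, 2 ** 6)
print(extreme, p_fixed_n, float(p_fixed_n))   # 14, 7/32, 0.21875 (≈ 0.22)

# Design B: flip until the first tail. The rejection class is "the first tail
# arrives on flip six or later", i.e. the first five flips are all heads:
# 1/64 + 1/128 + 1/256 + ... = 1/32.
p_until_tail = Fraction(1, 2) ** 5
print(p_until_tail, float(p_until_tail))      # 1/32, 0.03125 (≈ 0.03)
```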

Eliezer criticizes this in several ways. To start out with, the step where a frequentist decides to lump the actual result in with a group of similar results is subjective enough to allow a lot of freedom in what significance level ends up being reported. Maybe in the first case, instead of choosing the class of results with 5 or more of the same side, you only choose the class of results with 5 or more heads in particular, thus halving p=0.22 to p=0.11. He also criticizes the very notion of significance being determined by “rejecting the null hypothesis” rather than by looking at different theoretical effect sizes and how well they would have predicted the data. Two experiments that support entirely different effect sizes are both treated as “rejecting the null hypothesis”, and thus as evidence for the same theory, even if their results are inconsistent with each other.


All of this criticism of frequentist statistics and p-values seemed to be correct. But the analysis of how a Bayesian would update was different.

...a Bayesian looks at the experimental result and says, “I can now calculate the likelihood ratio (evidential flow) between all hypotheses under consideration. Since your state of mind doesn’t affect the coin in any way—doesn’t change the probability of a fair coin or biased coin producing this exact data—there’s no way your private, unobservable state of mind can affect my interpretation of your experimental results.”

If you’re used to Bayesian methods, it may seem difficult to even imagine that the statistical interpretation of the evidence ought to depend on a factor—namely the experimenter’s state of mind—which has no causal connection whatsoever to the experimental result. (Since Bayes says that evidence is about correlation, and no systematic correlation can appear without causal connection; evidence requires entanglement.)

So Eliezer is arguing that the likelihood ratios should obviously be the same in both scenarios, because the only relevant data is what sequence of flips the coin produced. The experimenter’s state of mind doesn’t change the probability that a coin of a given bias would produce this data, so it’s irrelevant.

But the key element Eliezer seems to be missing here is that the total sum of the data is not “The coin came up HHHHHT.” The data that we received is, instead, “The experimenter saw the coin come up HHHHHT.” And that is the sort of evidence that is causally entangled with the experimenter’s state of mind, because the experimenter’s state of mind determines in which cases the experimenter will ever see the coin come up HHHHHT. If the real fact of the matter was that the coin really was fair, for example, the experimenter’s state of mind following the “flip until you get tails” rule causes it to be less likely that the experimenter will ever get to the point of having a sixth flip in the first place, because there is now a 31/32 chance the experimenter would stop before flip six. Evidence plus the knowledge that the evidence is filtered can often have different properties than the unquoted evidence would have on its own; the experimenter’s method of deciding which evidence to search for changes which evidence they are likely to find.
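
(To make that 31/32 figure concrete, here is a minimal simulation of the stopping rule with a fair coin; the helper name flips_until_first_tail is mine, purely for illustration.)

```python
import random

random.seed(0)

def flips_until_first_tail() -> int:
    """Simulate one run of the 'flip until you get tails' design with a fair coin."""
    flips = 0
    while True:
        flips += 1
        if random.random() < 0.5:   # treat this outcome as tails
            return flips            # total flips, including the final tail

trials = 100_000
reached_flip_six = sum(flips_until_first_tail() >= 6 for _ in range(trials))
print(reached_flip_six / trials)    # ≈ 1/32 ≈ 0.03; the other ~31/32 of runs stop earlier
```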

The two possibilities produce very different prior distributions over possible outcomes. Assuming that the actual bias of the coin is such that the theoretical frequency of heads is f: then in the “flip n times” case, the prior is distributed between all possible sequences of length n, with each one having a probability of f^(number of total heads in the sequence) * (1-f)^(number of tails in the sequence). (In the case where f = 1/2 and n = 6, this simply reduces to (1/2)^6 = 1/64 for each sequence.) Meanwhile, in the “flip until you get tails” case, the prior is a 1-f chance of T, f(1-f) chance of HT, f^2(1-f) chance of HHT, f^3(1-f) chance of HHHT, and so on, always ending up at f^(l-1)(1-f) for a sequence of total length l. These priors both assign very different probabilities to different outcomes—in fact, there are many outcomes permitted by versions of the first that have zero probability on the second (like HTHTHTHT) or permitted by the second but with zero probability on low-n versions of the first (like HHHHHHHHHHHHT if n=6).
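
(Here is a small sketch of the two distributions, assuming the heads-probability f is known; the helper names prob_fixed_n and prob_until_tail are mine. I use HTHTHT rather than HTHTHTHT so that the example fits the n=6 case.)

```python
from fractions import Fraction

def prob_fixed_n(seq: str, f: Fraction, n: int = 6) -> Fraction:
    """P(seq | f) under the 'flip exactly n times' design."""
    if len(seq) != n:
        return Fraction(0)
    heads = seq.count("H")
    return f ** heads * (1 - f) ** (n - heads)

def prob_until_tail(seq: str, f: Fraction) -> Fraction:
    """P(seq | f) under the 'flip until the first tail' design."""
    if not seq.endswith("T") or "T" in seq[:-1]:
        return Fraction(0)          # only sequences of the form H...HT can occur
    return f ** (len(seq) - 1) * (1 - f)

f = Fraction(1, 2)
print(prob_fixed_n("HTHTHT", f), prob_until_tail("HTHTHT", f))
# 1/64 versus 0: possible with six fixed flips, impossible under the stopping rule
print(prob_fixed_n("HHHHHHHHHHHHT", f), prob_until_tail("HHHHHHHHHHHHT", f))
# 0 versus 1/8192: impossible with n=6, perfectly possible under the stopping rule
```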


So the probabilities over what happens during the experiment are very different, depending on whether you’re flipping n times or flipping until you get tails. Do those different conditional priors mean that the final likelihood ratios would end up being different, contrary to what Eliezer claimed? That’s what I thought.

After all, the way that Bayesian updating works in the first place is that you determine what the probabilities assigned to the experimental result were under different hypotheses (possible values of f), construct a likelihood distribution of what those probabilities were as a function of f, and multiply that by your prior distribution to update it. So the fact that the prior distributions conditional on the two types of experiments were so different would cause the things you were multiplying by to be different, and therefore give you different results.
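
(A minimal grid version of that updating procedure, assuming a uniform prior over f; this is my own illustration, not anyone else’s code.)

```python
import numpy as np

f_grid = np.linspace(0.001, 0.999, 999)      # candidate values for the coin's bias
prior = np.ones_like(f_grid) / len(f_grid)   # uniform prior over those candidates

# The probability the experimental design assigns to the result you actually saw,
# as a function of f; here, P(HHHHHT | f) for the 'flip 6 times' design.
likelihood = f_grid ** 5 * (1 - f_grid)

posterior = prior * likelihood
posterior /= posterior.sum()                 # normalize
print(f_grid[np.argmax(posterior)])          # peaks near f = 5/6 ≈ 0.833
```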

(Feel free to pause and look for my error here, if you haven’t found it yet.)


The error I made was that I was confusing the difference in the experimental designs’ probability distributions over experimental results, with a difference in the likelihood distribution over hypotheses about the coin that the experiment would cause you to update towards. It is the case that the different experimental designs cause some experimental results to occur at different frequencies, but that does not automatically imply that the final update about the coin’s bias will be different.

Whether the update about the coin is different depends only on the probabilities assigned to the result that actually happened. It doesn’t matter if they assign wildly different probabilities to results like HHT and HTHTTH, if the experiment turns up HHHHHT and they assign the same probability to that. Which is, in fact, the case. While the distributions of experimental results look quite different almost everywhere, they always happen to agree at the exact outcome that actually turns out to be the result of the experiment, by some amazing not-really-a-coincidence.

The reason this happens is that, while “flip until you get tails” really is a single probability distribution for any given value of f, the “flip n times” design also depends on the value of n, making it really a separate distribution for each n, all of which happen to resemble each other. If it takes 7 flips rather than 6 to get tails in the “flip until you get tails” experiment, that doesn’t mean you have suddenly gotten a result that would have been impossible on any “flip n times” distribution. It just means that you move to the “flip 7 times” distribution rather than “flip 6 times”, and the probability of HHHHHHT on that distribution will end up matching the probability assigned by the “flip until you get tails” distribution.
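
(A quick check that the probabilities really do match at the observed sequence, taking the “flip n times” design with n equal to the length of that sequence, for a few values of f; the helper names are mine.)

```python
from fractions import Fraction

def p_fixed_n(seq: str, f: Fraction) -> Fraction:
    """P(seq | f) under 'flip exactly len(seq) times': count heads and tails."""
    heads = seq.count("H")
    return f ** heads * (1 - f) ** (len(seq) - heads)

def p_until_tail(seq: str, f: Fraction) -> Fraction:
    """P(seq | f) under 'flip until the first tail': geometric form."""
    assert seq.endswith("T") and "T" not in seq[:-1]
    return f ** (len(seq) - 1) * (1 - f)

for f in (Fraction(1, 2), Fraction(2, 3), Fraction(9, 10)):
    for seq in ("HHHHHT", "HHHHHHT"):
        assert p_fixed_n(seq, f) == p_until_tail(seq, f)
print("identical probability for the observed sequence under both designs, at every f checked")
```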

(The algebra here is simple enough. As said before, the probability of getting a given sequence for “flip n times” is f^(number of total heads in the sequence) * (1-f)^(number of tails in the sequence). But assuming that the sequence is one in which every value is heads except for the last, which is tails, this reduces to f^(n-1) * (1-f). This is identical to the previously described function for “flip until you get tails.”)

See here for what the likelihood distribution over the coin’s bias f really does look like after seeing n-1 heads and one tail, regardless of which of the two experimental designs is used.
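
(If you would rather generate that curve yourself, something like the following works, assuming matplotlib is available; this is my own sketch, not the figure linked above.)

```python
import numpy as np
import matplotlib.pyplot as plt

# After n-1 heads and then one tail, either design gives P(result | f) = f**(n-1) * (1-f).
f = np.linspace(0.0, 1.0, 500)
n = 6
likelihood = f ** (n - 1) * (1 - f)

plt.plot(f, likelihood)
plt.xlabel("f (probability of heads)")
plt.ylabel("P(observed sequence | f)")
plt.title("Likelihood after 5 heads and then 1 tail (peaks at f = 5/6)")
plt.show()
```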


So why am I writing this post, if I turned out to be wrong as a simple question of fact? There are a few things I sure learned here, and it seems possible that someone else who is confused in the same way could learn something from them too.

The first lesson is just to be more careful about checking what precisely a probability distribution is telling you! In my initial calculations, I made a lot of mistakes before I could even start to be sure about what it was that was confusing me, several of which I haven’t even mentioned here (like initially modelling something using a binomial distribution that really wasn’t applicable there.) Most of these mistakes were of the nature of “I’m looking for a probability distribution over something something conditional on x; this thing here is a probability distribution over something something conditional on x; therefore it is the distribution I’m looking for.” There’s a difference between the distribution over experimental results given a particular type of experiment, and the likelihood ratio over hypotheses given the observation of a particular result; there’s a difference between any particular version of a distribution dependent on one of the function’s parameters, and the overall class of distributions formed from all possible values of that parameter.

The second thing I learned is that Bayesian likelihood ratios really do depend only on the probability each hypothesis assigned to the information that you actually received, and nothing else. Which I verbally knew before, but I hadn’t truly internalized. If two hypotheses assign the same probability to an outcome, and you see that outcome, that tells you nothing about any difference between the hypotheses. If I had skipped trying to quantify over all the possible outcomes, and just asked the comparatively simpler question of “what is the chance of HHHHHT in experiment 1, and in experiment 2?”, I probably could have solved it a lot more quickly.

And then there’s also a possible lesson for me to learn of “see, you really should meta-level trust the reasoning of Eliezer Yudkowsky and other people who have more expertise in a given mathematical domain.” I am not sure this is a good lesson to learn. And I’m also not sure that Eliezer actually saw all of the reasoning I went through in this post about why the two experiments assign the same probabilities to the actual result, rather than just guessing and happening to be correct. That being said, it still is the case that I would have previously given this as an example of a situation in which Eliezer Yudkowsky was wrong about basic probability theory (and I also would have said something like “and he probably made this mistake because of motivated reasoning in order to score points against frequentists”). And he turned out to be right all along. This is more likely in worlds where he knows his stuff more, and I have correspondingly updated my beliefs.

(I hope this goes without saying, but I’ll say it anyway: a meta-level update towards trusting experts’ math, does not mean first-order conforming to their opinions if you don’t first-order agree with or understand them. I’ll still keep trying to notice and point out when it looks like Eliezer is wrong about something—even if I might not bet as strongly that he really does turn out to be wrong.)