Ilya, I’m curious what your thoughts on Beautiful Probability are.
Personally, I flinch whenever I get to the “accursèd frequentists” line. But beyond that I think it does a decent job of arguing that Bayesians win the philosophy of statistics battle, even if they don’t generate the best tools for any particular application. And so it seems to me that in ML or stats, where the hunt is mostly for good tools instead of good laws, having the right philosophy is only a bit of a help, and can be a hindrance if you don’t take the ‘our actual tools are generally approximations’ part seriously.
In this particular example, it seems to me that ChrisHallquist has a philosophical difference with his stats professor, and so her not being Bayesian is potentially meaningful. I think that LW should tell statisticians that they shouldn’t believe cell phones cause cancer, even if they shouldn’t tell them what sort of conditional independence tests to use when they’re running PC on a continuous dataset.
Well, I am no Larry Wasserman.
But it seems to me that Bayesians like to make ‘average case’ statements based on their posterior, and frequentists like to make ‘worst case’ statements using their intervals. In complexity theory average and worst case analysis seem to get along just fine. Why can’t they get along here in probability land?
I find the philosophical question ‘what is probability?’ very boring.
Unrelated comment: the issue does not arise with PC, because PC learns fully observable DAG models, for which we can write down the likelihood just fine even in the continuous case. So if you want to be Bayesian w/ DAGs, you can run your favorite search-and-score method. The problem arises when you get an independence model like this one:
{ p(a,b,c,d) | A marginally independent of B, C marginally independent of D (and no other independences hold) }
which does not correspond to any fully observable DAG, and you don’t think your continuous-valued data is multivariate normal. I don’t think anyone knows how to write down the likelihood for this model in general.
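To make the "write down the likelihood" point concrete, here is a minimal search-and-score sketch for fully observable linear-Gaussian DAGs; the two-variable setup, data, and function names are my own illustration, not anything from the thread or from PC itself:

```python
# Minimal search-and-score sketch: for a fully observable linear-Gaussian
# DAG, each node contributes the Gaussian log-likelihood of its residuals
# given its parents, so candidate structures can be scored directly.
# The two-variable setup here is illustrative, not from the thread.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
a = rng.normal(size=n)
b = 2.0 * a + rng.normal(size=n)  # true structure: A -> B

def gaussian_loglik(residuals):
    # Log-likelihood of residuals under a fitted zero-mean Gaussian.
    var = residuals.var()
    return -0.5 * len(residuals) * (np.log(2 * np.pi * var) + 1)

def score_edge(a, b):
    # DAG with edge A -> B: marginal for A, linear regression for B | A.
    beta = (a @ b) / (a @ a)
    return gaussian_loglik(a - a.mean()) + gaussian_loglik(b - beta * a)

def score_empty(a, b):
    # DAG with no edges: A and B modeled as independent Gaussians.
    return gaussian_loglik(a - a.mean()) + gaussian_loglik(b - b.mean())
```

On this data `score_edge` comes out higher, as it should; note that A → B and B → A would score identically, since the two orientations are Markov equivalent.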
Why can’t they get along here in probability land?

Agreed.
the issue does not arise with PC, because PC learns fully observable DAG models, for which we can write down the likelihood just fine even in the continuous case.

Correct; I am still new to throwing causal discovery algorithms at datasets and so have not developed strong mental separations between them yet. Hopefully I’ll stop making rookie mistakes like that soon (and thanks for pointing it out!).
While I’m not Ilya, I find the ‘beautiful probability’ discussion somewhat frustrating.
Sure, if we test different hypotheses with the same small-sample data, we can get different results. However, starting from different priors, we can also get different results with that same data. Bayesianism won’t let you escape the problem, which is ultimately a problem of data volume.
LW (including myself) is very influenced by ET Jaynes, who believed that for every state of knowledge, there’s a single probability distribution that represents it. Therefore, you’d only get different results from the same data if you started with different knowledge.
It makes a lot of sense for your conclusions to depend on your knowledge. It’s not a problem.
Finding the prior that represents your knowledge is a problem, though.
I’ve read Jaynes (I used to spend long hours trying to explain to a true-believer why I thought MaxEnt was a bad approach to out-of-equilibrium thermo), but my point is that for small sample data, assumptions will (of course) matter. For our frequentist, this means that the experimental specification will lead to small changes in confidence intervals. For the Bayesian this means that the choice of the prior will lead to small changes in credible intervals.
Neither is wrong, and neither is “the one true path”; they are different, equally useful approaches to the same problem.
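A toy version of the small-sample point, using conjugate Beta-Binomial updating (the priors and counts are made up for illustration): the choice of prior visibly moves the posterior at n = 10 and is washed out at n = 10000.

```python
# Beta-Binomial posterior mean: (alpha + successes) / (alpha + beta + trials).
# Illustrates that prior choice matters at small n and washes out at large n.
def posterior_mean(alpha, beta, successes, trials):
    return (alpha + successes) / (alpha + beta + trials)

# Small sample: 7 successes in 10 trials.
flat_small = posterior_mean(1, 1, 7, 10)      # uniform Beta(1,1) prior
skeptic_small = posterior_mean(5, 5, 7, 10)   # prior concentrated near 0.5

# Same 70% rate at 10000 trials: the two priors nearly agree.
flat_big = posterior_mean(1, 1, 7000, 10000)
skeptic_big = posterior_mean(5, 5, 7000, 10000)
```

The two small-sample means differ by about 0.07; the large-sample means differ by under 0.0002, mirroring how a frequentist's specification choices also stop mattering as data accumulates.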
“< Jaynes quote > … If Nature is one way, the likelihood of the data coming out the way we have seen will be one thing. If Nature is another way, the likelihood of the data coming out that way will be something else. But the likelihood of a given state of Nature producing the data we have seen, has nothing to do with the researcher’s private intentions. So whatever our hypotheses about Nature, the likelihood ratio is the same, and the evidential impact is the same, and the posterior belief should be the same, between the two experiments. At least one of the two Old Style methods must discard relevant information—or simply do the wrong calculation—for the two methods to arrive at different answers.”
This seems to be wrong.
EY makes a sort of dualistic distinction between “Nature” (with a capital “N”) and the researcher’s mental state. But what EY (and possibly Jaynes, though I can’t tell from a short quote) is missing is that the researcher’s mental state is part of Nature, and in particular is part of the stochastic processes that generate the data for these two different experimental settings. Therefore, any correct inference technique, frequentist or Bayesian, must treat the two scenarios differently.
The point that EY is making there is kind of subtle. Think about it this way:
There’s a hidden double selected uniformly at random that’s between 0 and 1. You can’t see what it is; you can only press a button to see a 1 if another randomly selected double (over the same range) is higher than it, or 0 if the new double is less than or equal to it.
One researcher says “I’m going to press this button 100 times, and then estimate what the hidden double is.” The second researcher says “I’m going to press this button until my estimate of the double is at most .4.” Coincidentally, they see the exact same sequence of 100 presses, with 70 1s.
The primary claim is that the likelihood ratio from seeing 70 1s and 30 0s is the same for both researchers, and this seems correct to me. (How can the researcher’s intention change the hidden double?) The secondary claim is that the second researcher receives no additional information from the potentially surprising fact that he required 100 presses under his decision procedure. I have not put enough thought into it to determine whether or not the secondary claim is correct, but it seems likely to me that it is.
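The primary claim can be checked almost symbolically: any stopping rule multiplies the likelihood of "70 ones, 30 zeros" by a constant that depends on the rule but not on the hidden double, so likelihood ratios between hypotheses are untouched. A small sketch, where the particular constants are placeholders of my own:

```python
# Each press shows 1 with probability 1 - h (a fresh uniform beat the
# hidden double h). Any stopping rule contributes only an h-independent
# combinatorial factor, so the likelihood ratio between two hypotheses
# about h is the same for both researchers.
import math

def log_lik(h, ones=70, zeros=30, rule_log_const=0.0):
    # rule_log_const is the stopping rule's combinatorial term; it varies
    # with the rule but never with h.
    return rule_log_const + ones * math.log(1 - h) + zeros * math.log(h)

h1, h2 = 0.25, 0.35
fixed_n = math.lgamma(101) - math.lgamma(71) - math.lgamma(31)  # log choose(100, 70)
ratio_fixed = log_lik(h1, rule_log_const=fixed_n) - log_lik(h2, rule_log_const=fixed_n)
ratio_other = log_lik(h1, rule_log_const=-3.7) - log_lik(h2, rule_log_const=-3.7)
```

The two log-ratios come out identical because the rule's constant cancels; -3.7 is an arbitrary stand-in for whatever factor the "stop at estimate ≤ .4" rule actually induces.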
Split the researchers that generate the data from the reasoner who is trying to estimate the hidden double from the data.
What is the data that the estimator receives? There is clearly a string of 100 bits indicating the results of the comparisons, but there is also another datum which indicates that the experiment was stopped after 100 iterations. This is a piece of evidence which must be included in the model, and the way to include it depends on the estimator’s knowledge of the stopping criterion used by the data generator.
The estimator has to take into account the possibility of cherry picking.
EDIT:
I think I can use an example:
Suppose that I give you N ≈ 10^9 bits of data generated according to the process you describe, and I declare that I had precommitted to stop gathering data after exactly N bits. If you trust me, then you must believe that you have an extremely accurate estimate of the hidden double. After all, you are using 1 gigabit of data to estimate less than 64 bits of entropy!
But then you learn that I lied about the stopping criterion, and I had in fact precommitted to stop gathering data at the point that it would have fooled you into believing with very high probability that the hidden number was, say, 0.42.
Should you update your belief on the hidden double after hearing of my deception? Obviously you should. In fact, the observation that I gave you so much data now makes the estimate extremely suspect, since the more data I give you the more I can manipulate your estimate.
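A quick simulation of the cherry-picking worry (my own sketch; the thresholds, cap, and seed are arbitrary): an experimenter who keeps pressing until the running estimate dips to 0.42 produces systematically low estimates, while a fixed-N experimenter stays centered on the truth.

```python
# Optional-stopping sketch: true hidden double h = 0.5, each press is 1
# with probability 1 - h, running estimate of h is 1 - ones / presses.
# The adaptive experimenter stops once the estimate is <= 0.42 (or gives
# up at MAX_N presses); the fixed experimenter always takes 100 presses.
import numpy as np

rng = np.random.default_rng(1)
TRUE_H, TARGET, MIN_N, MAX_N, RUNS = 0.5, 0.42, 5, 200, 2000

def run_fixed(n=100):
    ones = (rng.random(n) > TRUE_H).sum()
    return 1 - ones / n

def run_adaptive():
    ones = 0
    for n in range(1, MAX_N + 1):
        ones += rng.random() > TRUE_H
        est = 1 - ones / n
        if n >= MIN_N and est <= TARGET:
            break
    return est

fixed = np.array([run_fixed() for _ in range(RUNS)])
adaptive = np.array([run_adaptive() for _ in range(RUNS)])
```

The fixed estimates average very close to 0.5 while the adaptive ones average noticeably lower, which is the sense in which the stopping criterion, and not the bits themselves, poisons the naive estimate.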
So, suppose I know the stopping criterion and the number of button presses that it took to stop the sequence, but am not given the actual sequence.
It seems to me like I can use the two of those to recreate the sequence, for a broad class of stopping criteria. “If it took 100 presses, then clearly it must be 70 1s and 30 0s, because if it had been 71 1s and 29 0s he would have stopped then and there would be only 99 presses, but he wouldn’t have stopped at 69 1s and 30 0s.” I don’t think I have any additional info.
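The reconstruction claim can be verified for one concrete (made-up) rule, "stop as soon as the fraction of 1s reaches 0.7": the necessary conditions at presses 99 and 100 pin the count of 1s to exactly 70.

```python
# If the rule "stop once ones / presses >= 0.7" fired at exactly n = 100,
# it must hold at n = 100 and must have failed at n = 99, whichever value
# the 100th press took. Enumerate the counts consistent with that.
THRESH, N = 0.7, 100

consistent = set()
for k in range(N + 1):          # k = total 1s when the rule fired at n = 100
    if k / N < THRESH:
        continue                # the rule did not fire at n = 100
    if k >= 1 and (k - 1) / (N - 1) < THRESH:
        consistent.add(k)       # last press a 1; rule had not yet fired at 99
    if k / (N - 1) < THRESH:
        consistent.add(k)       # last press a 0; rule had not yet fired at 99
```

Only k = 70 survives, so for this rule the press count really does determine the sequence's composition; rules keyed to a running estimate plausibly behave the same way whenever the estimate is monotone in the count of 1s.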
Should you update your belief on the hidden double after hearing of my deception? Obviously you should.

Update it to what? Assuming that the data is not tampered with, just that your stopping criterion was pointed at a particular outcome, it seems that unless the double is actually very close to 0.42, you are very unlikely to ever stop!* It looks like the different stopping criteria impose conditions on the order of the dataset, but the order is independent of the process that generates whether each bit is a 1 or a 0, and thus should be independent of my estimate of the hidden double.
* If you imagine multiple researchers, each of whom gets a different sequence, and I only hear from some of the researchers, then yes, it seems like selection bias is a problem. But the specific scenario under consideration is two researchers with identical experimental results drawing different inferences from those results, which is different from two researchers with differing experimental setups having different distributions of possible results.
Different information about part of nature is not sufficient to change an inference—the probabilities could be independent of the researcher’s intentions.
The probability of the observed data, given the hidden variable of interest, is in general not independent of the intentions of the researcher in charge of the data generation process.