Great post! It would be great if you had cites for various folks claiming myth k. Some of these sound unbelievable!
“Frequentist methods need to assume their model is correct.”
This one is hilarious. Does anyone say this? Multiply robust methods (Robins, Rotnitzky, et al.) aren’t exactly Bayesian, and their entire point is that you can get a giant piece of the likelihood arbitrarily wrong and your estimator is still consistent.
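To make the "giant piece of the likelihood arbitrarily wrong" claim concrete, here is a minimal sketch of the simplest member of that family, a doubly robust AIPW estimator. The simulation and all names in it are my own illustration, not from Robins/Rotnitzky's papers: the outcome model is deliberately misspecified (it ignores the confounder entirely), yet the estimator recovers the true treatment effect because the propensity model is correct.

```python
# Hedged sketch: a doubly robust AIPW (augmented inverse-probability-weighted)
# estimator of the average treatment effect. The outcome model is deliberately
# WRONG (it omits the confounder X), but the propensity model is right, so the
# estimator is still consistent. All variable names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Data-generating process: confounder X, treatment A, outcome Y.
X = rng.normal(size=n)
p = 1 / (1 + np.exp(-X))                 # true propensity P(A=1 | X)
A = rng.binomial(1, p)
Y = 2.0 * A + 3.0 * X + rng.normal(size=n)   # true ATE = 2.0

# Deliberately misspecified outcome model: regress Y on (1, A) only,
# ignoring X -- this is "a giant piece of the likelihood" gotten wrong.
Z = np.column_stack([np.ones(n), A])
beta = np.linalg.lstsq(Z, Y, rcond=None)[0]
mu1 = beta[0] + beta[1]                  # fitted E[Y | A=1] (confounded)
mu0 = beta[0]                            # fitted E[Y | A=0] (confounded)

# AIPW: outcome-model prediction plus an IPW correction using the
# (correct) propensity. Either piece being right suffices for consistency.
aipw1 = mu1 + A * (Y - mu1) / p
aipw0 = mu0 + (1 - A) * (Y - mu0) / (1 - p)
ate_aipw = np.mean(aipw1 - aipw0)

# Naive plug-in from the wrong outcome model alone, for comparison.
ate_naive = mu1 - mu0

print(f"AIPW estimate:  {ate_aipw:.3f}  (truth is 2.0)")
print(f"Naive estimate: {ate_naive:.3f}  (badly confounded)")
```

The naive difference-in-means is biased by a couple of units here, while AIPW lands near the truth despite using the same broken outcome model. Multiply robust methods extend this so that any one of several nuisance models being correct is enough.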
I’ll add some here; maybe others can chime in as well. Some of these are quotes of my past self. Is that fair?
1 & 2: When the assumptions of Bayes’ Theorem hold, and when Bayesian updating can be performed computationally efficiently, then it is indeed tautological that Bayes is the optimal approach
4: Share likelihood ratios, not posterior beliefs
5: They assume they have perfect information about experimental setups and likelihood ratios
6 & 7: Frequentist statistics are like Bayesian statistics with a default set of model-based priors provided, but hidden under a rug. The prior-hiding is bad, because it leaves broken mathematics that can’t be built upon to handle more complex cases. Unfortunately, “you can’t build on this to handle complex cases” is an extremely difficult argument to present convincingly, even when true; and by the time someone knows enough that talking about complex cases is feasible, they’re already locked in to one style or the other.
9: Case study: an abuse of frequentist statistics
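On item 4 ("Share likelihood ratios, not posterior beliefs"): the reason the advice works is that Bayes' rule in odds form factors cleanly, so a shared likelihood ratio lets each listener update their own prior, whereas a shared posterior bakes in the speaker's prior. A minimal sketch (my own illustration, not from the linked post):

```python
# Hedged sketch of why likelihood ratios compose across listeners while
# posteriors don't. Bayes' rule in odds form:
#     posterior odds = prior odds * likelihood ratio
# One shared LR updates ANY prior; a shared posterior only makes sense
# relative to the speaker's prior.

def update_odds(prior_prob, likelihood_ratio):
    """Apply Bayes' rule in odds form; return the posterior probability."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

lr = 9.0  # the evidence is 9x more likely under H than under not-H

for prior in (0.10, 0.50, 0.90):
    print(f"prior {prior:.2f} -> posterior {update_odds(prior, lr):.3f}")
# A prior of 0.10 (odds 1:9) times LR 9 gives even odds, i.e. posterior 0.5;
# a prior of 0.50 goes to 0.9 -- same LR, different listeners, correct updates.
```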
I thought you might enjoy it :). Your comments on LessWrong provided me with some of the motivation for writing it.