I’m new here and just going through the Sequences (though I have a mathematics background), but I have yet to see a good framing of the Bayesian/frequentist debate as maximum likelihood vs. maximum a posteriori. (I welcome referrals.)
I don’t think I’m representative of LessWrong in my views above. In fact, in some sense I’m shadowboxing with LessWrong in some of my comments above, so sorry for any confusion that introduced.
I don’t think I’ve ever seen maximum likelihood vs. maximum a posteriori discussed on LessWrong, and I’m kind of just griping about it! I don’t have references off the top of my head, but I recall this framing appearing in debates elsewhere (i.e., not on LessWrong), in more academic/stats settings. I can see if I can find examples. But in general it addresses the question from an estimation perspective instead of a hypothesis-testing one.
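To make the estimation framing concrete, here’s a minimal sketch of the distinction I have in mind, using a coin-flip example (the Beta(2, 2) prior and the counts are just illustrative choices, not a recommendation):

```python
# Observed coin flips: k heads out of n tosses.
n, k = 20, 14

# Maximum likelihood: maximize p(data | theta).
# For a Bernoulli likelihood this is just the sample proportion.
theta_mle = k / n

# Maximum a posteriori: maximize p(data | theta) * p(theta).
# With a Beta(a, b) prior the posterior is Beta(k + a, n - k + b),
# whose mode is (k + a - 1) / (n + a + b - 2).
a, b = 2.0, 2.0  # illustrative prior, not a recommendation
theta_map = (k + a - 1) / (n + a + b - 2)

print(f"MLE: {theta_mle:.3f}")  # 0.700
print(f"MAP: {theta_map:.3f}")  # ~0.682: the prior shrinks the estimate toward 0.5
```

The point being that the disagreement here is about how to produce an estimate of a parameter, not about how to test a hypothesis.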
Yes, there is a methodological critique of strict p-value calculations, but in the absence of informative priors, p-values are a really good indicator for experiment design. I feel that in hyping up Bayesian updates people are missing that and not offering a replacement. The focus on methods is a strength when you are talking about methods.
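To be concrete about what I mean by experiment design, here’s a rough sketch of the usual p-value-based sample-size calculation for a one-sample, two-sided z-test (the effect size, noise level, and power target are made-up numbers for illustration):

```python
from scipy.stats import norm

# Rough sample size for detecting a mean `delta` away from zero,
# with known standard deviation `sigma`, using a two-sided z-test.
alpha, power = 0.05, 0.80   # illustrative design targets
delta, sigma = 0.5, 1.0     # hypothetical effect size and noise level

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

# Standard formula: n >= ((z_{alpha/2} + z_beta) * sigma / delta)^2
n = ((z_alpha + z_beta) * sigma / delta) ** 2
print(f"approx. n: {n:.1f}")  # ~31.4 for these numbers
```

Without an informative prior, it’s not obvious what the drop-in Bayesian replacement for this kind of calculation is supposed to be.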
I think I’m in agreement with you here. My “methodological” was directed at what I view as a somewhat more typical LessWrong perspective, similar to what is expressed in the Eliezer quote. Sure, if we take some simple case we can address a more philosophical question about frequentism vs. Bayesianism, but in practical situations there are going to be so many analytical choices that there will always be issues. In an actual analysis you can always do things like look at multiple versions of an analysis and try to use that to refine your understanding of a phenomenon. If you fix the likelihood but allow the data to vary, then p-values are likely to be highly correlated with possible alternatives like Bayes factors. A lot of the critiques, I feel, are focused on making a clean philosophical approach while ignoring the inherent messiness that gets introduced whenever you want to infer things from reasonably complicated data or observations. I don’t think swapping likelihood ratios for p-values would suddenly change things all that much; a lot of the core difficulties of inferring things from data would remain.
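To illustrate the “fix the likelihood, let the data vary” point, here’s a rough simulation sketch: a normal mean with known variance, a point null against a unit-information normal prior (the prior and all the numbers are illustrative assumptions on my part). Both the two-sided p-value and the Bayes factor are monotone in |z|, so across datasets they track each other closely:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, sigma = 30, 1.0

pvals, log_bfs = [], []
for _ in range(2000):
    mu = rng.normal(0.0, 0.3)           # vary the "true" effect across datasets
    x = rng.normal(mu, sigma, size=n)   # same likelihood, different data
    z = np.sqrt(n) * x.mean() / sigma

    # Two-sided p-value for H0: mu = 0.
    pvals.append(2 * norm.sf(abs(z)))

    # Bayes factor for H1: mu ~ N(0, sigma^2) vs. H0: mu = 0
    # (closed form for a normal likelihood with known sigma).
    log_bf10 = -0.5 * np.log(n + 1) + 0.5 * z**2 * n / (n + 1)
    log_bfs.append(log_bf10)

# Both quantities are functions of |z|, so they end up strongly
# (negatively) correlated across the simulated datasets.
r = np.corrcoef(np.log(pvals), log_bfs)[0, 1]
print(f"corr(log p, log BF10) = {r:.2f}")
```

In a toy setting like this the two summaries carry nearly the same information, which is why I don’t think swapping one for the other addresses the harder, messier parts of real analyses.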