Quoting Gelman himself on page 77 of the linked paper:
If you could really express your uncertainty as a prior distribution, then you could just as well observe data and directly write your subjective posterior distribution, and there would be no need for statistical analysis at all.
In the full context of the paper, Gelman is noting this as a problem with standard Bayesian analysis. He doesn’t argue, as I’m arguing, that we’re trying to model our priors or the structure of our uncertainty, i.e. that we’re trying to approximate the fully Bayesian answer.
After going back and re-reading this, I realized your comments are more prescient than I gave them credit for. I’m now struggling with the Gelman-Shalizi article (link). Do you know of any LessWrong sources that discuss this? I need to really sit back and think, but it seems to me that Gelman and Shalizi are making some serious mistakes here. And they are two of the best practitioners I know of. That scares me a great deal.
I don’t know of any sources, short of an allusion or two in my comment history, but I don’t recommend digging for them. One point I think I’ve made in the past is that an implication of viewing statistics as a method of modeling and thus approximating our uncertainty is that Gelman’s posterior predictive checks have limits, though they’re still useful. If posterior predictive checking tells you some part of your model is wrong but you otherwise have good reason to believe that part is an accurate representation of your true uncertainty, it might still be a good idea to leave that part alone.
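For readers unfamiliar with the mechanics being discussed: a posterior predictive check simulates replicated datasets from the fitted model and compares a test statistic on those replicates to the same statistic on the observed data. The sketch below is a minimal illustration, not Gelman's own procedure; the conjugate normal model, the choice of the sample maximum as the test statistic, and all numbers are assumptions chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data; in practice this comes from your experiment.
y = rng.normal(loc=2.0, scale=1.0, size=50)
n = len(y)

# Conjugate normal model: known likelihood sd, N(0, 10^2) prior on the mean.
prior_mean, prior_sd, like_sd = 0.0, 10.0, 1.0
post_var = 1.0 / (1.0 / prior_sd**2 + n / like_sd**2)
post_mean = post_var * (prior_mean / prior_sd**2 + y.sum() / like_sd**2)

# Posterior predictive check: draw posterior means, simulate replicated
# datasets, and compare a test statistic T (here the sample maximum).
n_rep = 4000
mu_draws = rng.normal(post_mean, np.sqrt(post_var), size=n_rep)
y_rep = rng.normal(mu_draws[:, None], like_sd, size=(n_rep, n))
t_obs = y.max()
t_rep = y_rep.max(axis=1)

# Posterior predictive p-value: fraction of replicates at least as extreme
# as the observed statistic. Values near 0 or 1 flag model misfit.
p_ppc = (t_rep >= t_obs).mean()
print(p_ppc)
```

The point in the comment above is that a flagged misfit (an extreme `p_ppc`) is evidence about the model as a representation of the data-generating process, not automatically a verdict on whether the model faithfully encodes your uncertainty.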