New paper on Bayesian philosophy of statistics from Andrew Gelman

Andrew Gelman recently linked a new article entitled “Induction and Deduction in Bayesian Data Analysis.” At his blog, he also discussed some of the reviewers' comments and his rebuttal/discussion of those comments. It is interesting that he departs significantly from the common induction-based view of Bayesian approaches. As a practitioner myself, I am happiest about the discussion of model checking—something one can definitely do in the Bayesian framework but which almost no one does. Model checking is to Bayesian data analysis as unit testing is to software engineering.

Added 03/11/12
Gelman has a new blog post today discussing another reaction to his paper and giving some additional details. Notably:

The basic idea of posterior predictive checking is, as they say, breathtakingly simple: (a) graph your data, (b) fit your model to data, (c) simulate replicated data (a Bayesian can always do this, because Bayesian models are always “generative”), (d) graph the replicated data, and (e) compare the graphs in (a) and (d). It makes me want to scream scream scream scream scream when statisticians’ philosophical scruples stop them from performing these five simple steps (or, to be precise, performing the simple steps (a), (c), (d), and (e), given that they’ve already done the hard part, which is step (b)).
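For readers who want to see what these five steps look like in practice, here is a minimal sketch in Python of a posterior predictive check, assuming a toy normal model with known standard deviation and made-up data (none of this comes from Gelman's paper; the model, priors, and numbers are purely illustrative):

```python
# A minimal sketch of the five-step posterior predictive check described above,
# for an illustrative normal model with known sigma and a flat prior on mu.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# (a) graph your data -- here the "data" are simulated for illustration
y = rng.normal(loc=2.0, scale=1.5, size=100)

# (b) fit your model: with sigma assumed known and a flat prior on mu,
#     the posterior for mu is normal with the following mean and sd
sigma = 1.5
n = len(y)
post_mean = y.mean()
post_sd = sigma / np.sqrt(n)

# (c) simulate replicated datasets from the posterior predictive distribution
n_rep = 20
mu_draws = rng.normal(post_mean, post_sd, size=n_rep)
y_rep = rng.normal(mu_draws[:, None], sigma, size=(n_rep, n))

# (d) graph the replicated data, and (e) compare with the observed data
fig, axes = plt.subplots(3, 7, figsize=(14, 6), sharex=True, sharey=True)
axes = axes.ravel()
axes[0].hist(y, bins=15, color="black")
axes[0].set_title("observed")
for i in range(n_rep):
    axes[i + 1].hist(y_rep[i], bins=15, color="gray")
    axes[i + 1].set_title(f"rep {i + 1}")
plt.tight_layout()
plt.show()
```

If the model fits, the observed histogram should look like just another draw from the replicated set; systematic discrepancies (skew, outliers, multimodality) are exactly the kind of misfit this graphical check is meant to expose.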