I don’t know of any sources, short of an allusion or two in my comment history, but I don’t recommend digging for them. One point I think I’ve made in the past is that an implication of viewing statistics as a method of modeling and thus approximating our uncertainty is that Gelman’s posterior predictive checks have limits, though they’re still useful. If posterior predictive checking tells you some part of your model is wrong but you otherwise have good reason to believe that part is an accurate representation of your true uncertainty, it might still be a good idea to leave that part alone.
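To make the idea concrete, here is a minimal sketch of a posterior predictive check of the kind Gelman describes. Everything in it is invented for illustration: the data, the conjugate normal model with known variance, and the choice of test statistic are all assumptions, not anything from the comment above. The point is just the mechanic: draw from the posterior, simulate replicated datasets, and ask how often a replicate looks at least as extreme as the observed data.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data: deliberately drawn from a heavier-tailed distribution
# than the normal model we will fit, so the check should flag trouble.
y = rng.standard_t(df=3, size=200)
n = y.size

# Toy conjugate setup (assumed): normal likelihood with known variance 1
# and a flat prior, so the posterior for the mean is N(ybar, 1/n).
post_mean, post_sd = y.mean(), 1.0 / np.sqrt(n)

def ppc_pvalue(y, n_draws=2000, stat=lambda d: np.max(np.abs(d))):
    """Posterior predictive p-value for a chosen test statistic."""
    mus = rng.normal(post_mean, post_sd, size=n_draws)
    t_obs = stat(y)
    # For each posterior draw, simulate a replicated dataset y_rep and
    # compute the same statistic on it.
    t_rep = np.array([stat(rng.normal(mu, 1.0, size=n)) for mu in mus])
    # Fraction of replicates at least as extreme as the observed value.
    return float(np.mean(t_rep >= t_obs))

p = ppc_pvalue(y)
print(p)
```

A p-value near 0 or 1 says the model fails to reproduce this feature of the data (here, tail behavior). The point in the comment above is what you do next: if, despite such a result, you have good independent reason to think that part of the model faithfully represents your actual uncertainty, the check is information, not a mandate to revise.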