Wait a minute—when the Bayesian says “I think the coin probably has a chance near 50% of being heads”, she’s using data from prior observations of coin flips to say that. Which means that the frequentist might get the same answer if he added those prior observations to his dataset.
You can dismiss this objection by replacing the coin with a novel experimental test with an easily computed expected probability of success – say, the very first test of spin-up vs. spin-down for silver atoms.
Frequentists can’t claim relevant data sets for every experiment that has an obvious prior, without engaging in their own form of reference class tennis.
> Wait a minute—when the Bayesian says “I think the coin probably has a chance near 50% of being heads”, she’s using data from prior observations of coin flips to say that. Which means that the frequentist might get the same answer if he added those prior observations to his dataset.
Yes, that’s a good point. That would be considered using a data augmentation prior (Sander Greenland has advocated such an approach).
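Concretely, the two routes coincide: folding the prior in as pseudo-observations gives the frequentist point estimate the same value as the Bayesian conjugate update. Here is a minimal sketch in Python; the Beta(50, 50) prior and the 12-heads-in-20-flips data are made-up numbers for illustration, not anything from this exchange:

```python
# The belief "probably near 50%" encoded two ways:
#  (1) as a Beta(50, 50) prior for a Bayesian conjugate update,
#  (2) as 50 pseudo-heads + 50 pseudo-tails appended to the dataset.
prior_heads, prior_tails = 50, 50   # hypothetical prior strength
obs_heads, obs_tails = 12, 8        # hypothetical data: 12 heads in 20 flips

# Bayesian route: Beta prior + binomial likelihood -> Beta posterior mean
post_a = prior_heads + obs_heads
post_b = prior_tails + obs_tails
posterior_mean = post_a / (post_a + post_b)

# Frequentist route: plain MLE (sample proportion) on the augmented dataset
pseudo = [1] * prior_heads + [0] * prior_tails
observed = [1] * obs_heads + [0] * obs_tails
augmented_mle = sum(pseudo + observed) / len(pseudo + observed)

assert posterior_mean == augmented_mle
print(posterior_mean)  # 0.5166..., pulled toward 0.5 by the pseudo-counts
```

The estimates match by construction: the Beta posterior mean is exactly the sample proportion of the augmented data, which is why the pseudo-data framing is called a data augmentation prior.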
> You can dismiss this objection by replacing the coin with a novel experimental test with an easily computed expected probability of success – say, the very first test of spin-up vs. spin-down for silver atoms.
> Frequentists can’t claim relevant data sets for every experiment that has an obvious prior, without engaging in their own form of reference class tennis.
How can they have an obvious prior without an obvious relevant data set?