You give as an example a situation which is inherently not repeatable, where we’re forced to make do with reasoning under significant uncertainty and with very limited information, to decide what’s going on out of an incredibly wide hypothesis space. You correctly point out that this is hard.
You then say that in a situation where we can perform repeated experiments to exclude a single hypothesis, p-values work OK.
But in that exact situation Bayesian reasoning works fine. Sure, you might not agree on which alternative hypothesis is true, but so long as you both agree there are any alternative hypotheses that make the observed results more likely, after a few rounds of experiments you’ll have extremely low credence in the original hypothesis.
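To make that concrete, here’s a minimal sketch of the dynamic. The likelihood ratios are illustrative assumptions, not numbers from any real experiment: two observers back different alternatives, with each round of data being 4x more likely under observer A’s alternative and only 2x more likely under observer B’s, yet both end up with very low credence in the original hypothesis.

```python
def update(prior_h0, likelihood_ratio, rounds):
    """Posterior credence in H0 after repeated Bayesian updates.

    likelihood_ratio = P(data | alternative) / P(data | H0) per round.
    """
    p = prior_h0
    for _ in range(rounds):
        # Bayes in odds form: the odds on H0 shrink by a factor of
        # likelihood_ratio every round the data favours the alternative.
        odds_h0 = p / (1 - p)
        odds_h0 /= likelihood_ratio
        p = odds_h0 / (1 + odds_h0)
    return p

print(update(0.5, 4.0, 5))  # observer A: ~0.001
print(update(0.5, 2.0, 5))  # observer B: ~0.03
```

The observers disagree about the right alternative and about how fast their credence falls, but after a handful of rounds both have effectively abandoned the original hypothesis.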
Bayesian reasoning does work fine here, but if you were trying to communicate how you changed your mind about the original hypothesis, you wouldn’t report all your updates. You wouldn’t (and shouldn’t) go through the process of enumerating every alternative you considered, the likelihoods under each of them, and your priors, let alone justifying an estimate of the likelihood under alternatives like “or something I haven’t thought of”. If you’re interested in a distinguished hypothesis, which you almost always are (hypotheses like “this intervention has no effect” or “the normal explanation of how this process works is correct” are basically always available), then the most important thing to report is the probability of the evidence under that distinguished hypothesis: the updates on that hypothesis should agree even when the auxiliary updates do not, and that’s the hypothesis your peers will tend to care about the most.
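A toy illustration of why that one number is the useful thing to share (all the specifics here are made up for illustration: 9 heads in 10 flips, with observers backing coin biases of 0.8 and 0.9 as their alternatives): the probability of the evidence under the distinguished hypothesis “the coin is fair” is identical for every observer, while their full posteriors depend on which alternative and which priors they brought along.

```python
from math import comb

def likelihood(p_heads, heads=9, flips=10):
    # Binomial probability of the observed data given a heads-probability.
    return comb(flips, heads) * p_heads**heads * (1 - p_heads)**(flips - heads)

# Shared quantity: P(evidence | H0), the same for every observer.
p_e_given_h0 = likelihood(0.5)

def posterior_h0(alt_bias, prior_h0=0.5):
    # Posterior on H0 depends on the observer's choice of alternative.
    num = prior_h0 * p_e_given_h0
    return num / (num + (1 - prior_h0) * likelihood(alt_bias))

print(p_e_given_h0)        # the number both observers can report and agree on
print(posterior_h0(0.8))   # observers' posteriors differ...
print(posterior_h0(0.9))   # ...because their alternatives differ
```

The posteriors disagree, but both observers compute the same P(evidence | H0), which is exactly the quantity a p-value-style report communicates.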