But although Bayesianism makes the notion of knowledge less binary, it still relies too much on a binary notion of truth and falsehood. To elaborate, let’s focus on philosophy of science for a bit. Could someone give me a probability estimate that Darwin’s theory of evolution is true?
What do you mean by that question? The way I understand it, the probability is “zero”: the probability that, in the vast hypothesis space, Darwin’s theory of evolution is exactly the true one, and not some slightly modified variant, is completely negligible. My main problem is that “is theory X true?” is usually a question that carries no meaning on its own. You can’t answer it in a vacuum without specifying which other theories you’re “testing” it against (or here, asking the question against).
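The point about exact hypotheses having zero probability can be made concrete with a toy continuous hypothesis space. A minimal sketch, assuming a one-parameter space with an illustrative Gaussian posterior (the numbers are made up, not from any real model):

```python
import math

# Toy continuous "hypothesis space": one parameter theta, with a Gaussian
# posterior density over it (illustrative values, not from any real model).
MU, SIGMA = 0.5, 0.1

def posterior_cdf(x):
    """CDF of the Gaussian posterior, computed via the error function."""
    return 0.5 * (1 + math.erf((x - MU) / (SIGMA * math.sqrt(2))))

# The probability that theta equals one exact point is zero...
point_prob = posterior_cdf(0.5) - posterior_cdf(0.5)  # = 0.0

# ...but the probability that theta lies in a *region* around that point
# is perfectly well-defined and positive.
region_prob = posterior_cdf(0.52) - posterior_cdf(0.48)
```

So “is this exact theory true?” gets probability zero almost by construction; only questions about regions of the hypothesis space, or comparisons between specified alternatives, come out non-degenerate.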
If I understand correctly, what you’re saying with the “97% of being 97% true” is this: the probability is 97% that the true theory lies within some region of the hypothesis space, where that region has the property that the theories inside it share 97% of the properties of “Darwin’s point” (whatever that may mean). Am I understanding this correctly?
Regarding the stopping-rule issue, it really depends on how you decide when to stop. I believe sequential inference lets you handle this without any problem, though that’s not the same as requiring the p-value to fall within the desired bounds. But basically all of this comes from working with p-values instead of workable quantities like log-odds. The other problem with p-values is that they only let you work with binary hypotheses, and they make you believe that writing things like P(H0) actually carries meaning, when in reality you can’t test a hypothesis in a vacuum: you have to test it against another hypothesis (unless, once again, the situation really is binary).
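The stopping-rule independence of log-odds can be sketched directly. Assuming two simple hypotheses about a coin (the values 0.6 and 0.4 are arbitrary), the accumulated log-odds depend only on the observed counts, not on why you stopped collecting data:

```python
import math
import random

random.seed(0)

# Two simple hypotheses about a coin's heads probability.
P_H1, P_H2 = 0.6, 0.4  # arbitrary illustrative values

def sequential_log_odds(flips):
    """Accumulate log-odds for H1 vs H2, one observation at a time."""
    log_odds = 0.0  # even prior odds
    for heads in flips:
        if heads:
            log_odds += math.log(P_H1 / P_H2)
        else:
            log_odds += math.log((1 - P_H1) / (1 - P_H2))
    return log_odds

flips = [random.random() < 0.6 for _ in range(100)]
total = sequential_log_odds(flips)

# The counts are sufficient: the same log-odds come out whether we updated
# flip by flip or all at once, and nothing depends on the experimenter's
# intention to stop at 100 -- unlike a p-value under a fixed-n design.
heads = sum(flips)
tails = len(flips) - heads
batch = heads * math.log(P_H1 / P_H2) + tails * math.log((1 - P_H1) / (1 - P_H2))
```

The additivity is the “workable” part: each observation contributes its own log-likelihood-ratio term, so peeking at the running total and deciding to stop doesn’t distort the evidence the way optional stopping distorts a p-value.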
Another common mistake you did not mention is one made in many meta-analyses: aggregating the data from several studies without checking that the data are logically independent.
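The cost of violating that independence check is easy to demonstrate with a standard inverse-variance fixed-effect pooling. A minimal sketch with made-up study results (effect estimate, standard error):

```python
import math

# Hypothetical study results: (effect estimate, standard error).
studies = [(0.30, 0.10), (0.25, 0.12), (0.40, 0.15)]

def fixed_effect_meta(studies):
    """Inverse-variance weighted fixed-effect pooled estimate and SE."""
    weights = [1 / se**2 for _, se in studies]
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

est, se = fixed_effect_meta(studies)

# Accidentally counting the first study twice (e.g. the same cohort
# published in two papers) leaves the estimate in the same ballpark but
# shrinks the pooled standard error -- manufacturing confidence out of
# duplicated, logically dependent data.
est_dup, se_dup = fixed_effect_meta([studies[0]] + studies)
```

The pooled estimate barely moves, but the standard error shrinks as if genuinely new evidence had arrived, which is exactly the failure mode when dependent datasets are treated as independent.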