Part of the difficulty lies in subversion: it is supposedly difficult to arrange a scheme for falsifying data that has positive expected utility.
Given that one gets fame for “spectacular” discoveries, it is not at all difficult, especially in fields like biology, where there are frequently lots of confounding variables that you can use to provide cover.
That has always been the problem with experimental science: sometimes you can’t really protect against falsification.
Actually, the thing is, given the list of known biases, one shouldn’t trust one’s own rationality, let alone the rationality of other people. (A rationalist who trusts his own rationality while knowing about biases is just a new kind of irrationalist.) Another issue is that introducing novel hypotheses with ‘correct priors’ allows one to introduce a cherry-picked selection of hypotheses that produces undue confidence in a target hypothesis, confidence that would not exist if all possible hypotheses were considered. (I.e. if you want to push hypothesis A with undue confidence, you introduce hypotheses B, C, D, E, F… which raise the probability of A, but not G, H, I, J… which would lower it.) A fully rational, even slightly selfish agent would do exactly this. It is not enough for a method to converge once all hypotheses are eventually considered; it has to provide the best approximation at any given time. That pretty much makes most methods that sound great in abstract, unbounded theory entirely inapplicable.
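To make the cherry-picking concrete, here is a minimal sketch in Python. The hypothesis names and likelihood numbers are entirely made up for illustration; the point is only that, under Bayes’ rule with equal priors over the considered set, the posterior of A depends on which rival hypotheses are admitted for consideration:

```python
def posterior(target, hypotheses, likelihood):
    """Posterior of `target` given equal priors over the considered hypotheses."""
    prior = 1.0 / len(hypotheses)
    evidence = sum(prior * likelihood[h] for h in hypotheses)
    return prior * likelihood[target] / evidence

# How well the observed data fits each hypothesis (hypothetical values).
likelihood = {"A": 0.8, "B": 0.1, "C": 0.1, "G": 0.9, "H": 0.9}

# Same data, same hypothesis A -- only the considered set differs.
weak_rivals = posterior("A", ["A", "B", "C"], likelihood)    # rivals fit poorly
strong_rivals = posterior("A", ["A", "G", "H"], likelihood)  # rivals fit well

print(weak_rivals)    # A dominates when only weak rivals are considered
print(strong_rivals)  # A loses most of its support once strong rivals appear
```

With the weak rivals, A ends up at 0.8; with the strong rivals omitted from the first set included instead, it drops below a third, even though nothing about A or the data changed.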
Also, BTW, science does trust your rationality and your ability to set up a probabilistic argument. But it only does so when it makes sense for you to trust that argument: when you are actually doing bulletproof math with no gaps where errors can creep in.