IME that’s the case in a sizeable fraction of disagreements between humans; but if they “let reality be [their] final arbiter” they ought to realize that in the process.
Perhaps, but it is rather unlikely that they are equally wrong. It is far more likely that one will be less wrong than the other. Indeed, improving our knowledge by comparing such degrees of correctness would seem to be the whole point of Bayesian rationality.
That makes the (flawed) assumption that, in a disagreement, they cannot both be wrong.
Also, they could be wrong about whether they actually disagree.
I have also heard it quoted like this.