If/when I point to empirical evidence that practising using Bayes' theorem does in fact help your meta-rationality, my model of Pat Modesto says "Oh, so you claim that you have 'empirical' evidence and this means you know 'better' than others. Many people thought they too had 'special' evidence that allowed them to have 'different' beliefs." Pssh.
In general I agree with your post, and while Pat's is an argument I could imagine someone making to me, I don't let it overwrite my models. If I think that person X has good meta-rationality, and you suggest my evidence is bad according to one particular outside view, I will not throw away my models, but keep them while I examine the argument. If the argument is compelling I'll update, but the same heuristic that keeps me from making bucket errors will also stop me from immediately saying anything like "Yes, you're probably right that I don't really have evidence of X's meta-rationality being strong".