Another is recognizing when I’m privileging my own position… that is, when I believe X, but I’m aware that if the accidental circumstances of my life were different I would believe not-X. That’s not always a problem (sometimes my position genuinely is superior!) but it’s a warning sign.
I agree that something like this has to be an important warning sign. But how do I calibrate this sense so that it isn’t going off all the time (and thus no longer serving as evidence of anything)? Even if we limit it to situations where the “different circumstances” aren’t “having access to more or less relevant information,” and the question at hand is non-normative, that’s still a vast space of disagreement that goes dingdingding! for almost any nontrivial question. I can of course notice when my opinions are atypical for, say, young college-educated white men, but even then it seems likely that some not-as-quantifiable aspect of my experiences, other than the relevant evidence I’ve gathered or could infer[1], may have led me to this. It’s certainly very easy to construct stories to this effect!
Perhaps demographic atypicality of opinions serves as Bayesian evidence that your views do just happen to be the result of (good) epistemic luck, and are therefore true?
[1] i.e., if I never read in detail the arguments and theories of those with very different ideologies than my own, I must assume that they’re at least somewhat convincing, as they’re convincing to someone. Conservation of expected evidence, &c.
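For reference, the principle being invoked, in standard notation (where H is the belief at issue and E is the event “I read their arguments and find them convincing”):

```latex
% Conservation of expected evidence: the prior must equal the
% probability-weighted average of the possible posteriors.
P(H) = P(H \mid E)\,P(E) + P(H \mid \lnot E)\,P(\lnot E)
```

So if actually reading the arguments and finding them convincing would lower my credence, then anticipating the opposite outcome must raise it; I can’t expect, in advance, to be moved in a known direction by looking.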
Say I’m in circumstance X and believe Y, and you’re in X’ and believe Y’.
I can try to understand what evidence you have for Y’—that is, try to understand the subset of your experience of X’ that is relevant to believing Y’. I can then use that understanding as additional evidence that informs my estimates of the likelihood of Y and Y’.
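In odds form, that update looks something like the following sketch (the numbers are invented purely for illustration):

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
def posterior_odds(prior_odds, likelihood_ratio):
    """prior_odds: odds of Y over Y'; likelihood_ratio: P(E|Y) / P(E|Y')."""
    return prior_odds * likelihood_ratio

# Suppose I start out 4:1 in favor of Y, and the evidence I've extracted
# from your experience of X' is twice as likely under Y' as under Y:
odds = posterior_odds(4.0, 0.5)   # 2.0, i.e. 2:1 still in favor of Y
prob_Y = odds / (1.0 + odds)      # ~0.67, down from 0.8
print(odds, prob_Y)
```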
Or I can decide that I don’t feel like putting that much effort into the question. Which is perfectly valid… as you say, this comes up all the time, and one has to prioritize.
In that second case, I can take various shortcuts.
Indeed, many such shortcuts are wired into my brain, and they aren’t necessarily bad ones, though of course experienced deceivers are adept at subverting them. Others can be learned, either implicitly or explicitly. Some are so unreliable in the modern world that I do best to _un_learn them.
What I try not to do is fool myself into thinking I’ve actually evaluated the situation when I’ve merely taken a shortcut. If I’m dismissing Y’ and reaffirming my belief in Y without doing the analysis, the alarm goes off to remind me that no, that’s not justified. Y is my current belief, and I’m choosing not to investigate Y’ because I’ve got better things to do, and that’s really all I can legitimately say.