Yes, the key issue is not so much whether on a first analysis you came to think those other folks are not as well informed as you, but whether you would have thought that if you had been taught by them. The issue is how to overcome the numerous easy habits of assuming that what you were taught must have been better. Once you see that on a simple first analysis you would each think the other less informed, you must realize that the problem is harder than you had thought, and you need to re-evaluate your reasons for so easily concluding that they are wrong and you are right. Until you can find a style of analysis that would have convinced you, had you grown up among them, to convert to this side, it is hard to believe you’ve overcome this bias.
The issue is how to overcome the numerous easy habits of assuming that what you were taught must have been better.
Well, that’s one issue. But I was addressing a different—more theoretical—issue, namely, whether acknowledging the contingency of one’s beliefs (i.e. that one would have believed differently if raised differently) necessarily undermines epistemic justification.
(Recall the distinction between third-personal ‘accounts’ of rational justification and first-personal ‘instruction manuals’.)
“Necessarily” is an extremely strong claim, making it overwhelmingly likely that such a claim is false. So why would that ever be an interesting issue? And to me, first-person instruction manuals seem obviously more important than third-person “accounts”.
I get the impression that many (even most) of the commenters here think that acknowledged contingency thereby undermines a belief. But if you agree with me that this is much too quick, then we face the interesting problem of specifying exactly when acknowledged contingency undermines justification.
I don’t know what you mean by “important”. I would agree that the instruction manual question is obviously of greater practical importance, e.g. for those whose interest in the theory of rationality is merely instrumental. But to come up with an account of epistemic justification seems of equal or greater theoretical importance, to philosophers and others who have an intrinsic interest in the topic.
It’s also worth noting that the theoretical task could help inform the practical one. For example, the post on ‘skepticism and default trust’ (linked in my original comment) argues that some self-acknowledged ‘epistemic luck’ is necessary to avoid radical skepticism. This suggests a practical conclusion: if you hope to acquire any knowledge at all, your instruction manual will need to avoid being too averse to this outcome.
The vast majority of claims people make in ordinary language are best interpreted as on-average-tendency or all-else-equal claims; it almost never makes sense to interpret them as logical necessities. Why should this particular case be any different?
Robin Hanson just ended a post with the phrase “overcome bias.” This feels momentous, like theme music should be playing.
May I suggest the following?
http://www.youtube.com/watch?v=cSZ55X3X4pk
http://tvtropes.org/pmwiki/pmwiki.php/Main/TitleDrop