Wikipedia says:

Since the time of Quine et al., other philosophers have formulated at least four versions of the principle of charity. These alternatives may conflict with one another, so that charity becomes a matter of taste. The four principles are:

1. The other uses words in the ordinary way;
2. The other makes true statements;
3. The other makes valid arguments;
4. The other says something interesting.
Great post!
Both the principle of charity and the least convenient possible world principle often do not point to just one argument.
I disagree with “charity becomes a matter of taste,” and think instead that one should always apply each version.
I must disagree. Sometimes it really is more important to maintain an accurate model of reality. Enabling the other’s bullshit is not always virtuous or useful.
The principle of charity is a counter-bias against one’s (estimated) motivated cognition, underestimation of inferential distance, typical-mind fallacy, and so on. It pushes toward a more accurate model of the person one is speaking with.
As a way to find truth about the subject in contention, rather than about the other person, it is superseded by the Least Convenient Possible World (LCPW) principle.
It is also a guide for responding, which doesn’t mean you have to believe anything in particular, just as one can always guess “red” for the color of the next card when most of the cards are red and some are blue. As for its social utility: sure, it’s not always useful, but “always” is a pretty extreme standard. It’s not always useful to be honest, or to refrain from faking a seizure in a debate, either.
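The card-guessing point can be made concrete with a quick simulation (a toy sketch; the 70/30 red/blue split and the trial count are assumed for illustration, not taken from the discussion): always guessing the majority color yields higher accuracy than matching your guesses to the observed frequencies.

```python
import random

random.seed(0)

P_RED = 0.7      # assumed fraction of red cards (illustrative)
TRIALS = 100_000

always_red = 0   # correct guesses when always guessing the majority color
matching = 0     # correct guesses under "probability matching"

for _ in range(TRIALS):
    card = "red" if random.random() < P_RED else "blue"
    # Strategy 1: always guess "red".
    if card == "red":
        always_red += 1
    # Strategy 2: guess "red" with probability 0.7, "blue" with 0.3.
    guess = "red" if random.random() < P_RED else "blue"
    if guess == card:
        matching += 1

print(always_red / TRIALS)  # close to 0.70
print(matching / TRIALS)    # close to 0.58, i.e. 0.7^2 + 0.3^2
```

The point of the sketch: committing to the single most likely answer beats mirroring the uncertainty in your guesses, even though neither strategy requires believing the next card is certainly red.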
No, basically it just isn’t. The principle of charity, as used here and in general, is not “be charitable to the extent that unadorned Bayesian reasoning would tell you to anyway”. For most purposes, applying charity only to the extent that it seeks accuracy is utterly insufficient. Actually applying these principles consistently implies using motivated cognition to achieve perceived pragmatic goals.
I have looked into the matter by reading articles in the Stanford Encyclopedia of Philosophy, searching through Google, and reading journal articles by the originators and popularizers of the phrase, and I now know much more about this than I did before.
It is probably worth a separate discussion post.
If you are willing to put some detail in it would be worth a main post too.
I think a separate discussion post would be useful. When I wrote this, I was thinking of the PoC as something like an axiom that isn’t explicitly built into logic but is necessary for productive discussion: otherwise people would constantly nitpick or strawman each other, and there would be no way to stop them. Based on the discussion here, though, it seems more like a tool intended for social situations, one that is usually suboptimal for truth-finding purposes, although it is still better than always going with your initial interpretation or always going with the least logical interpretation.
There are some items missing from the list, such as:
Motivated cognition is your enemy. Don’t invite it to feast on your mind.
Well that’s a butter-coated 10-degree incline if ever I’ve seen one. Alternatively, please elaborate on how we can’t have the former without the latter.
Summary: unreflective beliefs about others that fail to consider that the lens that saw them has its flaws, when the lens can see those flaws, are mere rationality as a ritual.
Those last three are importantly different from the first four because they don’t refer to the possible intent of a person.
As a tool to combat inferential distance, I can do better than taking my first-order best guess of what someone means: I can, in addition, give special consideration to the possibility that they are using words in an unusual way to say something true/valid/interesting. I can then modify my first-order guess so that it is more likely they mean the true/valid/interesting thing than I had previously thought.
It might be that different interpretations involve them saying something true and valid but uninteresting, or interesting and valid but untrue, etc. That is why special attention should be paid to applying the principle of charity multiple times. One reading might have them committing only fallacy A, another only fallacy B, and it would be a mistake to modify my first-order guess of what they intended based on my idiosyncratic distaste for a particular fallacy instead of doing my best to model their likely beliefs, preferences, etc.
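The reweighting described above can be sketched as a toy Bayesian update (every reading name, prior, and multiplier here is a hypothetical illustration, not from the discussion): start with the first-order guess over interpretations, upweight readings under which the speaker says something true/valid/interesting, and renormalize.

```python
# First-order guess: probability that the speaker meant each reading.
priors = {
    "strawman reading": 0.5,
    "charitable reading": 0.3,
    "odd-wording reading": 0.2,
}

# Assumed charity multipliers: how much likelier people are to intend a
# true/valid/interesting claim than the naive guess credits (illustrative).
charity_weight = {
    "strawman reading": 1.0,
    "charitable reading": 2.0,
    "odd-wording reading": 1.5,
}

# Multiply each prior by its weight, then renormalize to a distribution.
unnormalized = {k: priors[k] * charity_weight[k] for k in priors}
total = sum(unnormalized.values())
posterior = {k: v / total for k, v in unnormalized.items()}

for reading, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{reading}: {p:.2f}")
```

Note the design point this illustrates: the charitable reading can overtake the strawman reading after the update, yet no single reading is assigned probability 1, which matches applying the principle to several interpretations at once rather than picking one and committing.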
There is no reason to believe my interpretation of what others mean is much clouded by a wrongful bias toward disproportionately disbelieving that the “Holy Scripture makes true statements”. Inferential distance and a motivated desire to win the argument are far more likely to see me wrongly misinterpreting someone, or being wrong about the facts of the world that relate to them, than being wrong about random facts of the world. To the extent that I am wrong about facts of the world, I expect to mislead myself only slightly, and this would only be noticeable for questions whose truth is unclear.
The principle of charity is an interpretive framework, not a method for evaluating truth. I can read an argument charitably and still think it is wrong. In other words, the principle of charity can be paraphrased as “Assume your debater is not being Logically Rude.” DH7 arguments help ensure that your discussions are actually improving the accuracy of your beliefs, and they require reading your opponent generously.
I did many hours of reading about this yesterday. I recommend holding off on arguing about it or spending time researching it until I have (or haven’t) posted on this topic in the near future.
False, see this comment.