I agree with most of the points in this article, but I think it underestimates a fundamental difference between two ways of disagreeing.
Take the typical political debate about “raising taxes on the wealthy to give social help to the poor” vs “giving tax cuts and reducing social help”. People can disagree on that topic for two completely different sets of reasons.
People can disagree on that topic because, even if they share a more-or-less common utility function, in which having people dying of cold in the street is valued very negatively, they have different expectations about what each policy will do. Some will say that raising taxes on the wealthy and giving the money to the poor will improve the living conditions of the poor without hurting the wealthy much, and will be good for the economy since it’ll increase demand for construction, goods, and so on, which is the true motor of the economy. Some will say that raising taxes on the wealthy and giving the money to the poor will lower the incentive for the rich to invest in the economy, and for the poor to find themselves a job, and will in the end damage the whole economy and make everyone poorer in the long run. I have my own opinion on that (and it may very well be reflected in the way I formulated the two hypotheses, but that’s really not my goal here, so please accept my apologies if it’s too obvious). That disagreement is not easy to settle (or it would have been settled long ago), but it can be argued rationally: looking at history, comparing different countries, making predictions about what will happen when a country changes its welfare/tax policy and checking those predictions later on, and so forth. Labeling as “evil” someone who shares a similar utility function but disagrees on the ways to maximize it is indeed a catastrophic mistake.
But people can also disagree on that topic because they have different utility functions. Some think that the homeless deserve to be homeless because they were too lazy and stupid, and assign a positive value to lazy/stupid people being punished. Others do not include any significant term in their utility function for the homeless, reasoning that they are only a tiny part of the population and not worth considering. Still others think that having rich people is in itself unfair, and that taxing them heavily, even if it doesn’t reduce poverty, increases fairness and is therefore valued positively. Disagreements between people with different utility functions like that will be much, much harder to settle. And labeling as “evil” someone who has a broadly different utility function than your own is much more understandable. If we don’t use the word “evil” for that, well, what is it for?
And it gets even more complicated, because you can have people who claim they share your utility function when in fact they don’t. And you don’t always know for sure whether they do or not. And it’s often a bit of both: people who favor high taxes/social help and those who favor tax cuts/no social help usually have both different weightings in their utility functions and different expected outcomes for the two policies.
You pose a number of excellent questions in your comment that one might ask about a political opponent. So the next step is: how does one go about answering those questions? How do you figure out whether your opponent has different terminal values, different instrumental values, or both?
That’s a very difficult question. One part of the answer lies in understanding the nature of the debate. Basically, I consider there to be two kinds of debates:
A debate between two people who trust each other (at least to a point, like friends or family members), without witnesses. In that case, the whole point of the debate is trying to discover the truth, hoping that at the end the two will agree (one conceding he was wrong, the two finding common ground in the middle, or finding a third option unthought of at the beginning). Here it’s quite easy to ask the other about his real goals (immediate or distant) and to assume he’ll be honest about them.
A debate between two people who know perfectly well they won’t convince each other, and instead try to convince the witnesses of the debate. That’s a typical debate between political candidates in an election. In that kind of debate, you have to be very doubtful about the claimed values (terminal or instrumental) of everyone involved. That kind of debate is very hard to handle in a non-mindkilling way.
Most real-life debates fall somewhere between those two archetypes.
So I make that double distinction: between disagreement in expectation and disagreement in utility function, and between debating in order to get closer to the truth and debating in order to convince third parties. I don’t have a magical solution for finding out for sure which case we are in, but I’ll be glad to hear some tips/insight on the topic.
Another way to state it: to me, a political debate in front of witnesses is very much like a prisoner’s dilemma. You can cooperate, by being truthful about your terminal and instrumental values, being honest, pointing out the flaws of your own side when you see them, and so on. Or you can defect, by hiding your true values, hiding facts, avoiding your weak points, and even lying about facts.
If both cooperate, the debate will go smoothly, and is likely to end with everyone closer to the truth than when they started. If both defect, the debate will get dirty, and the two debaters will end up more convinced of their own views than they were initially, but the witnesses will still, on average, get closer to the truth, because I do believe it’s easier to defend something “true” than something “false”. But if one cooperates and the other defects, then the one who defects is very likely to convince the witnesses, regardless of whether he is right or wrong.
So for myself, I tend to use “tit-for-tat with initial cooperation and forgiveness”, as I do with anything I identify as an iterated prisoner’s dilemma: I cooperate initially; if I get the feeling the other is defecting, I’ll resort to defecting too (though I’ll never go as far as openly lying, which is against my ethical code of conduct); but I’ll still try to fall back to cooperating every now and then, to see whether the other then cooperates or not.
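For what it’s worth, that strategy is easy to make precise. Here is a minimal sketch of forgiving tit-for-tat in an iterated prisoner’s dilemma, using the standard textbook payoff numbers (3/3 for mutual cooperation, 1/1 for mutual defection, 5/0 for defecting against a cooperator); the payoffs, the 10% forgiveness rate, and the opponent strategies are my own illustrative assumptions, not anything from the comments above:

```python
import random

# Standard PD payoffs (assumed for illustration; the thread gives no numbers):
# (my payoff, their payoff) for each (my move, their move) pair.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def forgiving_tit_for_tat(opponent_history, forgive_prob=0.1, rng=random):
    """Cooperate on the first round; afterwards copy the opponent's last
    move, but occasionally forgive a defection to probe for cooperation."""
    if not opponent_history:
        return "C"  # initial cooperation
    if opponent_history[-1] == "D" and rng.random() < forgive_prob:
        return "C"  # occasional fallback to cooperation
    return opponent_history[-1]

def play(opponent, rounds=100, seed=0):
    """Run `rounds` iterations against an opponent strategy and
    return the forgiving-tit-for-tat player's total score."""
    rng = random.Random(seed)
    my_history, opp_history = [], []
    my_score = 0
    for _ in range(rounds):
        mine = forgiving_tit_for_tat(opp_history, rng=rng)
        theirs = opponent(my_history)
        my_history.append(mine)
        opp_history.append(theirs)
        my_score += PAYOFFS[(mine, theirs)][0]
    return my_score

always_cooperate = lambda history: "C"
always_defect = lambda history: "D"

print(play(always_cooperate))  # sustained mutual cooperation
print(play(always_defect))     # mostly mutual defection, plus forgiven rounds
```

Against a cooperator this strategy never defects, and against a constant defector it loses only a little on the occasional forgiving probe, which is roughly the trade-off described above: you don’t get exploited for long, but you keep checking whether cooperation can be restored.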
I think that debates of the first type are probably even rarer than you estimate; even when two people who trust each other, and who both have a deliberate intent to seek the truth, are arguing alone, political instincts and biases kick in pretty hard.
I really do like your overall strategy; I’ll try to remember in the future to occasionally turn down my politics-face a bit and see whether the other is willing to return to a more cooperative state.