If we’re succeeding in making ourselves rational, one would think this would lead to political convergence.
Politics includes much that is a matter of preference, not just accurate beliefs about the world. For example, “I like it when I get more money when X is done” is at the core of many political issues. Perhaps more importantly, different preferences about how to aggregate human experiences can lead to genuine disagreement about political policy even among altruists. For example, an altruist who holds values similar to those Robin Hanson blogs about will inevitably have a political disagreement with me, no matter how rational we both are.
Political beliefs about matters of fact should converge. And once they do, whatever differences remain won’t be resolved by discussion, because there’s nothing left to discuss.
Indeed, but the trouble is that the optimal strategy for promoting one’s preferences is often to convince people that opposing them is somehow objectively wrong and delusional, rather than a matter of a fundamental clash of power and interest. (This typically involves convincing oneself too, since humans tend to be bad at lying and good at sniffing out liars, and they appreciate sincerity a lot.)
That said, one of the main reasons I find discussions on LW interesting is the unusually high ability of many participants to analyze issues in this regard, i.e., to correctly separate the factual from the normative and preferential. The bad examples, where people fail to do so and the discourse breaks down, tend to stick out unpleasantly, but overall I’d say the situation is not at all bad, certainly by any realistic standard for human discourse in general.
If we can distinguish between preference and accuracy claims, that would be quite a large step towards rationality.