Rational vs Reasonable

This post draws ideas from Personhood: A Game for Two or More Players on Melting Asphalt.

I’ve been lax in my attempt to write something for LW once weekly, but I hope to continue on roughly that schedule nonetheless. I still have many posts planned—the next one after this will likely be a rationality game that we’ve been playing at our weekly meetups in LA.

Last time, I talked about the distinction between associated and relevant. This time I’d like to talk about another distinction which comes up in rationality-conscious communication: that of rational vs reasonable.

Rationality has to do with figuring out what you actually want, being strategic about getting it, understanding what constitutes evidence, and so on. For more information, read the entire LessWrong archive.

Reasonableness is, in contrast, a social skill. It has to do with being able to give explanations for your actions, listening to and often accepting justifications for changing those actions, playing well on a team, behaving in a reliable and predictable manner, and dealing judiciously with guilt and responsibility.

I like reasonable people. Reasonableness is very valuable. It’s probably a big part of what attracts me to rationalist circles in the first place: rationalists often value reasonableness more highly and are more careful to exercise it. Yet, rational and reasonable are two very different things. The most rational people are not the most reasonable, nor vice versa. I think it’s worth examining in some detail how these two tails come apart.

Perhaps the largest difference comes from the way our explanations for behavior differ from the actual causes. Consciously or unconsciously, we engage in lies and half-truths when it comes to giving reasons for our behavior. This tendency is difficult to overcome because the brain does not ask permission before generating these justifications. The incentives don’t push us to total dishonesty, but they don’t push us to total honesty, either. Evolution and everyday social feedback conspire to make us give reasonable-sounding, socially defensible reasons in place of our best-guess account of the actual causes of our actions.

Aspiring rationalists will want to do away with some or all of this. However, this comes at a cost. To be perceived as reasonable by others, you will need to produce justifications for your beliefs and actions. Hollywood rationality would have it that a good rationalist always has a detailed, accurate explanation at the ready. In reality, having a best-estimate belief does not entail being able to give a reason for it, and simply explaining that is not always effective. Furthermore, even when we can produce explanations which are accurate rather than merely convenient, it may not be a good idea to use them. Paul Graham argues that, to keep it free of social pressures, the quest for accurate belief is best kept private, or shared with a few trusted friends. Tell culture, in contrast, holds that honesty should be attempted nonetheless.

I won’t try to say here what the best way to handle this is, only that a decision must be made.

Another difference between reasonable and rational is the stance toward opinions. The Overton window provides a range of reasonable opinions. Step outside of it, and you are likely to be labelled unreasonable. Stay within it, however, and you’re entitled to your opinion, whatever it may be. To a rationalist, you are never entitled to your opinion (not even “I don’t know”). From one perspective, the Overton window has shrunk to a point: it consists of the single correct spread of uncertainty given the evidence. (This sets aside the role of priors, but in most cases it is implausible that differing priors carry much of the weight.) From another perspective, the Overton window is wide open: it’s wherever the evidence takes us.
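To make “the single correct spread of uncertainty” concrete: Bayes’ theorem (a standard formula, not something stated in the original post) pins down a unique posterior once the prior and the evidence are fixed:

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{\sum_{H'} P(E \mid H')\, P(H')}
```

Two reasoners who share both priors and evidence are forced to the same P(H | E); any remaining disagreement traces back to differing evidence or differing priors, not to an entitlement to opinion.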

So far, I’ve only discussed the reasonableness of personal beliefs and actions. The main use-case of reasonable behavior, though, is coordinating group action. Reasons are a currency exchanged for favors. If I want you to turn down your music, I can explain to you that I dislike it. I reciprocate by responding to similar requests from you. However, reasonableness does not compel me to comply if your justification is missing or inadequate. There seems to be an intuitive scale by which we compare the size of the reason and the size of the favor. With no reason whatsoever, I may agree to small tasks, but will refuse most things. If a person’s life is in danger, almost any request for aid is seen as justifiable.

Reason serves as a common standard for group coordination. Reasonableness isn’t about the coordination norm in itself; driving on the wrong side of the road because I forgot which country I’m in isn’t unreasonable. Continuing to do so after being told, however, is. Another example is fair allocation. Without using words, resources could be split with nonverbal signals of displeasure (and, when needed, threats of violence); this calls to mind the reaction of a monkey when it sees another monkey given a larger reward for the same task. When reasons can be exchanged, however, more sophisticated coordination becomes possible. The group can agree to give bonuses for good behavior and withdraw resources for bad. Favors and slights can be remembered and brought up later (and we develop a self-serving bias, keeping track of all the reasons others owe us, as a strategy to game this system). These reputations can spread by word of mouth.

The use of reason becomes a matter of reputation in itself, as well. We like reasonable people, and behave favorably toward them. We dislike unreasonable people, and find ways to punish them. Accepting reasons is like cooperating in a society-wide prisoner’s dilemma; tit-for-tat will be among the common strategies, making would-be defectors wary. Many different patterns of cooperation and defection can emerge as strategies in different situations, however. This becomes intertwined with other status games: a low-status person may be obliged to accept almost any reason given by higher-status individuals, while the higher-status ignore good reasons with impunity.
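To see why tit-for-tat keeps defectors wary, here is a minimal sketch (purely illustrative, not from the post) of an iterated prisoner’s dilemma in Python, using the conventional payoff values:

```python
# Iterated prisoner's dilemma: tit-for-tat vs. always-defect.
# Conventional payoffs: temptation 5, reward 3, punishment 1, sucker 0.
PAYOFFS = {  # (my_move, their_move) -> my payoff; "C" = cooperate, "D" = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Return each strategy's total payoff over repeated rounds."""
    hist_a, hist_b = [], []  # each list records the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): steady mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): one exploited round, then mutual defection
```

The defector gains a single exploitative round before tit-for-tat withdraws cooperation, after which both sides collect the meager mutual-defection payoff; cooperating with a fellow cooperator earns far more over time.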

What constitutes a good reason will depend on group norms. Reason can be applied to these norms themselves, producing a further-refined group standard. Perhaps we can see rationality as an extremely refined standard of this kind. Reasonable people throughout time gradually built up a picture of what kinds of reasons can be given, drawing a line in the sand between logic and fallacies. Continued disagreements called for further and further refinements. Probability theory and notions of induction became necessary. Foundational problems arose as we continued to recursively ask for the reasons behind our reasons. Bayesian thought rose and fell and rose again. Now we find ourselves discussing meta-ethics and advanced decision theories.