Rational lies

If I were sitting opposite a psychopath who had a particular sensitivity about ants, and I knew that if I told him that ants have six legs he would jump up and start killing the people around him, then it would be difficult to justify telling him my wonderful fact about ants, regardless of whether I believe that ants really have six legs.

Or suppose I knew my friend’s wife was cheating on him, but I also knew that he was terminally ill and would die within the next few weeks. The question of whether or not to inform him of my knowledge is genuinely complex, and the truth or falsity of my knowledge about his wife is only one factor in the answer. Different people may disagree about the correct course of action, but no-one would claim that the only relevant fact is the truth of the statement that his wife is cheating on him.

This is all a standard result of expected utility maximization, of course. Vocalizing or otherwise communicating a belief is itself an action, and just like any other action it has a set of possible outcomes, to which we assign probabilities as well as some utility within our value coordinates. We then average out the utilities over the possible outcomes for each action, weighted by the probability that they will actually happen, and choose the action that maximizes this expected utility. Well, that’s the gist of the situation, anyway. Much has been written on this site about the implications of expected utility maximization under more exotic conditions such as mind splitting and merging, but I’m going to be talking about more mundane situations, and the point I want to make is that beliefs are very different objects from the act of communicating those beliefs.
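
To make the averaging step concrete, here is a minimal sketch in Python; the actions, outcomes, probabilities, and utilities are invented purely for illustration and aren't meant to model any particular decision.

```python
# A minimal sketch of expected utility maximization. Every action has a set
# of possible outcomes; each outcome has a probability and a utility. We
# average the utilities, weighted by probability, and pick the action with
# the highest average. All numbers here are invented for illustration.

def expected_utility(outcomes):
    """Probability-weighted average utility over an action's outcomes."""
    return sum(p * u for p, u in outcomes)

# Communicating a belief is just another action alongside staying silent.
actions = {
    "state the belief": [(0.7, 10.0), (0.3, -20.0)],  # may help, may backfire
    "stay silent":      [(1.0, 0.0)],                 # nothing changes
}

for name, outcomes in actions.items():
    print(f"{name}: EU = {expected_utility(outcomes):.1f}")

best = max(actions, key=lambda name: expected_utility(actions[name]))
print("Chosen action:", best)
```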

This distinction is particularly easy to miss as the line between belief and communication becomes subtler. Suppose that a friend of mine has built a wing suit and is about to jump off the Empire State Building in the belief that he will fly gracefully through the sky. Since I care about my friend’s well-being, I try to explain to him the concepts of gravity and aerodynamics, and the effect they will have on him if he launches himself from the building. Examining my decision in detail, I have placed a high probability on his death if he jumps off the building, and calculated that, since I value his well-being, my expected utility would not be maximized by him making the leap.

But now suppose that my friend is particularly dull and unable or unwilling to grasp the concept of aerodynamics, and is hence unswayed by my argument. Having reasonably explained my beliefs to him, am I absolved of the moral responsibility to save him? Not from a utilitarian standpoint, since there are other courses of action available to me. I could, for example, tell him that his wing suit has been sabotaged by aliens—a line of reasoning that I happen to know he’ll believe, given his predisposition towards X-Files-esque conspiracy theories.

Would doing so be contrary to my committed rationalist stance? Not at all; I have rationally analysed the options available to me and rationally chosen a course of action. The conditions for the communication of a belief to be deemed rational are exactly the same decision-theoretic conditions that apply to any other action: namely, that it maximizes expected utility.
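
For the sake of concreteness, the same sketch can be applied to the wing-suit scenario; every probability and utility below is made up, chosen only so that the comparison mirrors the story above.

```python
# The same expected-utility comparison applied to the wing-suit example.
# The probabilities and utilities are entirely invented; a large negative
# number stands in for my friend falling to his death.

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

options = {
    "say nothing":           [(0.95, -1000.0), (0.05, 0.0)],
    "explain aerodynamics":  [(0.90, -1000.0), (0.10, 20.0)],  # he won't follow the argument
    "claim alien sabotage":  [(0.05, -1000.0), (0.95, 5.0)],   # he'll believe it and stay grounded
}

for name, outcomes in options.items():
    print(f"{name}: EU = {expected_utility(outcomes):.1f}")

print("Best option:", max(options, key=lambda n: expected_utility(options[n])))
```

Under these made-up numbers the alien story wins, not because lying is good in itself, but because it happens to be the action that maximizes expected utility given what I know about my friend.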

If this all sounds too close to “tell people what they need to hear” then let’s ask under what specific conditions it might be rational to lie. Clearly this depends on your values. If your utility function places high value on people falling to their death then you will tend to lie about gravity and aerodynamics as much as possible. However, for the purpose of practical rationality I’m going to assume for the rest of this article that some of your basic values align with my own, such as the value of fulfilled human existence, and so on.

Convincing somebody of a falsehood will, on average, lead to them making poorer decisions according to their values. My soon-to-be-airborne friend may be convinced not to leap from the building immediately, but may shortly return with a wing suit covered in protective aluminium foil to ward off those nasty interfering aliens. Nobody is exempt from the laws of rationality. To the extent that their values align with mine, convincing another of a falsehood will have at least this one negative consequence with respect to my own values. The examples I gave above are specific situations in which other factors dominate my desire for another person to be better informed during the pursuit of their goals, but such situations are the exception rather than the rule. All other things equal, lying to an agent with similar values to mine is a bad decision.

Had I actually convinced my friend of the nature of gravity and aerodynamics rather than spinning a story about aliens, then next time he might return to the rooftop with a parachute rather than a tin foil wing suit. In the example I gave, this course of action was unlikely to succeed, but again this situation is the exception rather than the rule. In general, a true statement has the potential to improve the recipient’s brain/universe entanglement and thereby improve his potential for achieving his goals, which, if his values align with my own, constitutes at least one factor in favour of truth-telling. All other things equal, telling the truth is a good decision.

This doesn’t mean that telling the truth is valuable only in terms of its benefits to me. My own values include bettering the lives of others, so achieving “my goals” includes working towards the good of others as well as my own.

Is there any other sense in which truth-telling may be considered a “good” in its own right? Naively, one might argue that the act of uttering a truth could itself be a value, but such a utility function would be maximized by a universe tiled with tape players broadcasting mundane, true facts about the universe. It would be about as well aligned with the values of a typical human being as a paper clip maximizer.

A more reasonable position is to count rationality in others among one’s fundamental values. This, I feel, is more closely aligned with my own values. All other things equal, I would like those around me to be rational. Not just to live in a society of rationalists, though this is an orthogonal value. Not just to engage in interesting, stimulating discussion, though this is also an orthogonal value. And not just for others to succeed in achieving their goals, though this, again, is an orthogonal value. But to actually maximize the brain/universe entanglement of others, for its own sake.

Do you value rationality in others for its own sake?