Morality is not about willpower

Most people believe the way to lose weight is through willpower. My own successful experience losing weight suggests that this is not the case. You will lose weight if you want to, meaning you effectively believe0 that the utility you will gain from losing weight, even time-discounted, will outweigh the utility from yummy food now. In LW terms, you will lose weight if your utility function tells you to. This is the basis of cognitive behavioral therapy (the effective kind of therapy), which tries to change people’s behavior by examining their beliefs and changing their thinking habits.

Similarly, most people believe behaving ethically is a matter of willpower; and I believe this even less. Your ethics is part of your utility function. Acting morally is, technically, a choice; but not the difficult kind that holds up a stop sign and says “Choose wisely!” We notice difficult moral choices more than easy moral choices; but most moral choices are easy, like choosing a ten dollar bill over a five. Immorality is not a continual temptation we must resist; it’s just a kind of stupidity.

This post can be summarized as:

  1. Each normal human has an instinctive personal morality.

  2. This morality consists of inputs into that human’s decision-making system. There is no need to propose separate moral and selfish decision-making systems.

  3. Acknowledging that all decisions are made by a single decision-making system, and that the moral elements enter it in the same manner as other preferences, results in many changes to how we encourage social behavior.

Many people have commented that humans don’t make decisions based on utility functions. This is a surprising attitude to find on LessWrong, given that Eliezer has often cast rationality and moral reasoning in terms of computing expected utility. It also demonstrates a misunderstanding of what utility functions are. Values, and utility functions, are models we construct to explain why we do what we do. You can construct a set of values and a utility function to fit your observed behavior, no matter how your brain produces that behavior. You can fit this model to the data arbitrarily well by adding parameters. It will always have some error, as you are running on stochastic hardware. Behavior is not a product of the utility function; the utility function is a product of (and predictor of) the behavior. If your behavior can’t be modelled with values and a utility function, you shouldn’t bother reading LessWrong, because “being less wrong” means behaving in a way that is closer to the predictions of some model of rationality. If you are a mysterious black box with inscrutable motives that makes unpredictable actions, no one can say you are “wrong” about anything.
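
To make the fitted-model point concrete, here is a toy sketch of fitting a utility function to observed choices. The options, features, and numbers are all invented, and the logit choice rule is just one convenient modelling assumption:

```python
# Toy sketch only: a utility function as a model fitted to observed behavior.
# The options, features, and observed choice rate are all invented.
import numpy as np

# Each option is described by two features (say, tastiness and healthiness).
options = np.array([
    [0.9, 0.1],   # cake
    [0.3, 0.8],   # salad
])

# Observed behavior: this person picked the salad 70% of the time.
observed_salad_rate = 0.7

def salad_probability(w):
    """Probability of choosing the salad under a logit (softmax) choice rule."""
    u = options @ w                        # utility of each option
    return np.exp(u[1]) / np.exp(u).sum()

# Fit the weight on healthiness by grid search, holding the tastiness weight
# at 1. Adding more features (parameters) would fit the data even better.
candidates = np.linspace(0.0, 5.0, 501)
errors = [(salad_probability(np.array([1.0, w])) - observed_salad_rate) ** 2
          for w in candidates]
best_w = candidates[int(np.argmin(errors))]

print(best_w)                                        # roughly 2.07
print(salad_probability(np.array([1.0, best_w])))    # close to 0.7
```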

If you still insist that I shouldn’t talk about utility functions, though—it doesn’t matter! This post is about morality, not about utility functions. I use utility functions just as a way of saying “what you want to do”. Substitute your own model of behavior. The bottom line here is that moral behavior is not a qualitatively separate type of behavior and does not require a separate model of behavior.

My view isn’t new. It derives from ancient Greek ethics, Nietzsche, Ayn Rand, B.F. Skinner, and comments on LessWrong. I thought it was the dominant view on LW, but the comments and votes indicate it is held at best by a weak majority.

Relevant EY posts include “What would you do without morality?”, “The gift we give to tomorrow”, “Changing your meta-ethics”, and “The meaning of right”; and particularly the statement, “Maybe that which you would do even if there were no morality, is your morality.” I was surprised that no comments mentioned any of the many points of contact between this post and Eliezer’s longest sequence. (Did anyone even read the entire meta-ethics sequence?) The view I’m presenting is, as far as I can tell, the same as that given in EY’s meta-ethics sequence up through “The meaning of right”1; but I am talking about what it is that people are doing when they act in a way we recognize as ethical, whereas Eliezer was talking about where people get their notions of what is ethical.

Ethics as willpower

Society’s main story is that behaving morally means constantly making tough decisions and doing things you don’t want to do. You have desires; other people have other desires; and ethics is a referee that helps us mutually satisfy these desires, or at least not kill each other. There is one true ethics; society tries to discover and encode it; and the moral choice is to follow that code.

This story has implications that usually go together:

  • Ethics is about when people’s desires conflict. Thus, ethics is only concerned with interpersonal relations.

  • There is a single, Platonic, correct ethical system for a given X. (X used to be a social class but not a context or society. Nowadays it can be a society or context but not a social class.)

  • Your desires and feelings are anti-correlated with ethical behavior. Humans are naturally unethical. Being ethical is a continual, lifelong struggle.

  • The main purpose of ethics is to stop people from doing what they naturally want to do, so “thou shalt not” is more important than “thou shalt”.

  • The key to being ethical is having the willpower not to follow your own utility function.

  • Social ethics are encouraged by teaching people to “be good”, where “good” is the whole social ethical code. Sometimes this is done without explaining what “good” is, since it is considered obvious, or perhaps because it is more convenient to the priesthood to leave it unspecified. (Read the Koran for an extreme example.)

  • The key contrast is between “good” people who will do the moral thing, and “evil” people who do just the opposite.

  • Turning an evil person into a good person can be done by reasoning with them, teaching them willpower, or convincing them they will be punished for being evil.

  • Ethical judgements are different from utility judgements. Utility is a tool of reason, and reason only tells you how to get what you want, whereas ethics tells you what you ought to want. Therefore utilitarians are unethical.

  • Human society requires spiritual guidance and physical force to stop people from using reason to seek their own utility.

    • Religion is necessary even if it is false.

    • Reason must be strictly subordinated to spiritual authority.

    • Smart people are less moral than dumb people, because reason maximizes personal utility.

  • Since ethics are desirable, and yet contrary to human reason, they prove that human values transcend logic, biology, and the material world, and derive from a spiritual plane of existence.

  • If there is no God, and no spiritual world, then there is no such thing as good.

    • Sartre: “There can no longer be any good a priori, since there is no infinite and perfect consciousness to think it.”

  • A person’s ethicality is a single dimension, determined by the degree to which a person has willpower and subordinates their utility to social utility. Each person has a level of ethicality that is the same in all domains. You can be a good person, an evil person, or somewhere in between—but that’s it. You should not expect someone who cheats at cards to be courageous in battle, unless they really enjoy battle.

People do choose whether to follow the ethics society promulgates. And they must weigh their personal satisfaction against the satisfaction of others; and those weights are probably relatively constant across domains for a given person. So there is some truth in the standard view. I want to point out errors; but I mostly want to change the focus. The standard view focuses on a person struggling to implement an ethical system, and obliterates distinctions between the ethics of that person, the ethics of society, and “true” ethics (whatever they may be). I will call these “personal ethics”, “social ethics”, and “normative ethics” (although the last encompasses all of the usual meaning of “ethics”, including meta-ethics). I want to increase the emphasis on personal ethics, or ethical intuitions. Mostly just to insist that they exist. (A surprising number of people simultaneously claim to have strong moral feelings, and that people naturally have no moral feelings.)

The conventional story denies that the first two exist: Ethics is what is good; society tries to figure out what is good; and a person is more or less ethical to the degree that they act in accordance with ethics.

The chief error of the standard view is that it explains ethics as a war between the physical and the spiritual. If a person is struggling between doing the “selfish” thing and the “right” thing, that proves that they want both about equally. The standard view instead supposes that they have a physical nature that wants only the “selfish” thing, and some internal or external spiritual force pulling them towards the “right” thing. It thus hinders people from thinking about ethical problems as trade-offs, because the model never shows two “moral” desires in conflict except in “paradoxes” such as the trolley problem. It also prevents people from recognizing cultures as moral systems—to really tick these people off, let’s say morality-optimizing machines—in which different agents with different morals are necessary parts for the culture to work smoothly.

You could recast the standard view with the conscious mind taking the place of the spiritual nature, the subconscious mind taking the place of the physical nature, and willpower being the exertion of control over the subconscious by the conscious. (Suggested by my misinterpretation of Matt’s comment.) But to use that to defend the “ethics as willpower” view, you must assume that the subconscious usually wants to do immoral things, while the conscious mind is the source of morality. And I have no evidence that my subconscious is less likely to propose moral actions than my conscious mind. My subconscious mind usually wants to be nice to people; and my conscious mind sometimes comes up with evil plans that my subconscious responds to with disgust.

… but being evil is harder than being good

At times, I’ve rationally convinced myself that I was being held back from my goals by my personal ethics, and I determined to act less ethically. Sometimes I succeeded. But more often, I did not. Even when I did, I had to first build up a complex structure of rationalizations, and exert a lot of willpower to carry through. I have never been able (or wanted) to say, “Now I will be evil” (by my personal ethics) and succeed.

If being good takes willpower, why does it take more willpower to be evil?

Ethics as innate

One theory that can explain why being evil is hard is Rousseau’s theory that people are noble savages by birth, and would enact the true ethics if only their inclinations were not crushed by society. But if you have friends who have raised their children by this theory, I probably need say no more. A fatal flaw in noble-savage theory is that Rousseau didn’t know about evolution. Child-rearing is part of our evolutionary environment; so we should expect to have genetically evolved instincts and culturally evolved beliefs about child-rearing which are better than random, and we should expect things to go terribly wrong if we ignore these instincts and practices.

Ethics as taste

Try, instead, something between the extremes of saying that people are naturally evil, or naturally good. Think of the intuitions underlying your personal morality as the same sort of thing as your personal taste in food, or maybe better, in art. I find a picture with harmony and balance pleasing, and I find a conversation carried on in harmony and with a balance of speakers and views pleasing. I find a story about someone overcoming adversity pleasing, as I find an instance of someone in real life overcoming adversity commendable.

Perhaps causality runs in the other direction; perhaps our artistic tastes are symbolic manifestations of our morals and other cognitive rules-of-thumb. But I can think of many moral “tastes” I have that have no obvious artistic analog, which suggests that the moral tastes are the more fundamental. I like making people smile; I don’t like pictures of smiling people.

I don’t mean to trivialize morality. I just want people to admit that most humans often find pleasure in being nice to other humans, and usually feel pain on seeing other humans—at least those within the tribe—in pain. Is this culturally conditioned? If so, it’s by culture predating any moral code on offer today. Works of literature have always shown people showing some other people an unselfish compassion. Sometimes that compassion can be explained by a social code, as with Wiglaf’s loyalty to Beowulf. Sometimes it can’t, as with Gilgamesh’s compassion for the old men who sit on the walls of Uruk, or Odysseus’ compassion for Ajax.

Subjectively, we feel something different on seeing someone smile than we do on eating an ice-cream cone. But it isn’t obvious to me that “moral feels / selfish feels” is a natural dividing line. I feel something different when saving a small child from injury than when making someone smile, and I feel something different when drinking Jack Daniels than when eating an ice-cream cone.

Computationally, there must be little difference between the way we treat moral, aesthetic, and sensual preferences, because none of them reliably trumps the others. We seem to just sum them all up linearly. If so, this is great, to a rationalist, because then rationality and morals are no longer separate magisteria. We don’t need separate models of rational behavior and moral behavior, and a way of resolving conflicts between them. If you are using utility functions, you only need one model; values of all types go in, and a single utility comes out. (If you aren’t using utility functions, use whatever it is you use to predict human behavior. The point is that you only need one of them.) It’s true that we have separate neural systems that respond to different classes of situation; but no one has ever protested against a utility-based theory of rationality by pointing out that there are separate neural systems responding to images and sounds, and so we must have separate image-values and sound-values and some way of resolving conflicts between image-utility and sound-utility. The division of utility into moral values and all other values may even have a neural basis; but modelling that difference has, historically, caused much greater problems than it has solved.
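
Here is a minimal sketch of that single-model claim: moral, aesthetic, and sensual values all feed into one utility number, and the choice falls out of that number. The outcome categories and weights are invented for illustration:

```python
# Illustrative only: one decision-making system, not separate moral and
# selfish ones. Every preference type contributes to a single utility number.
# The outcome categories and values below are made up.

def utility(outcome):
    """Sum moral, aesthetic, and sensual values into one number."""
    return sum(outcome.values())

def choose(options):
    """Pick the option whose outcome has the highest total utility."""
    return max(options, key=lambda name: utility(options[name]))

options = {
    "share the pie":    {"moral": 0.8, "aesthetic": 0.1, "sensual": 0.3},
    "take all the pie": {"moral": -0.9, "aesthetic": 0.0, "sensual": 0.7},
}

print(choose(options))   # "share the pie": total utility 1.2 vs -0.2
```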

The problem for this theory is: If ethics is just preference, why do we prefer to be nice to each other? The answer comes from evolutionary theory. Exactly how it does this is controversial, but it is no longer a deep mystery. One feasible answer is that evolution selects for inclusive fitness rather than individual reproductive success, so dispositions to help others can be selected for.3 It is important to know how much of our moral intuitions is innate, and how much is conditioned; but I have no strong opinion on this other than that it is probably some of each.

This theory has different implications than the standard story:

  • Behaving morally feels good.

  • Social morals are encouraged by creating conditions that bring personal morals into line with social morals.

  • A person can have personal morals similar to society’s in one domain, and very different in another domain.

  • A person learns their personal morals when they are young.

  • Being smarter enables you to be more ethical.

  • A person will come to feel that an action is ethical if it leads to something pleasant shortly after doing it, and unethical if it leads to displeasure.

  • A person can extinguish a moral intuition by violating it many times without consequences—whether they do this of their own free will, or under duress.

  • It may be easier to learn to enjoy new ethical behaviors (thou shalts), than to dislike enjoyable behaviors (thou shalt nots).

  • The key contrast is between “good” people who want to do the moral thing, and “bad” people who are apathetic about it.

  • Turning a (socially) bad person into a good person is done one behavior at a time.

  • Society can reason about which ethics it would like to encourage under current conditions.

As I said, this is nothing new. The standard story makes concessions to it, as social conservatives believe morals should be taught to children using behaviorist principles (“Spare the rod and spoil the child”). This is the theory of ethics endorsed by “Walden Two” and warned against by “A Clockwork Orange”. And it is the theory of ethics so badly abused by the former Soviet Union, among other tyrannical governments. More on this, hopefully, in a later post.

Does that mean I can have all the pie?

No.

Eliezer addressed something that sounds like the “ethics as taste” theory in his post “Is morality preference?”, and rejected it. However, the position he rejected was the straw-man position that acting to immediately gratify your desires is moral behavior. (The position he ultimately promoted, in “The meaning of right”, seems to be the same I am promoting here: That we have ethical intuitions because we have evolved to compute actions as preferable that maximized our inclusive fitness.)

Maximizing expected utility is not done by greedily grabbing everything within reach that has utility to you. You may rationally leave your money in a 401K for 30 years, even though you don’t know what you’re going to do with it in 30 years and you do know that you’d really like a Maserati right now. Wanting the Maserati does not make buying the Maserati rational. Similarly, wanting all of the pie does not make taking all of the pie moral.
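
As a toy illustration of that arithmetic (all the utilities and the discount factor are invented), the delayed option can still come out ahead after thirty years of discounting:

```python
# Toy arithmetic only; the utilities and discount factor are invented.
annual_discount = 0.97      # assumed per-year discount factor
years = 30

maserati_now = 100          # utility of buying the Maserati today
retirement_later = 400      # utility of the grown 401K in 30 years

discounted_later = retirement_later * annual_discount ** years
print(discounted_later)                   # about 160
print(discounted_later > maserati_now)    # True: waiting still wins
```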

More importantly, I would never want all of the pie. It would make me unhappy to make other people go hungry. But what about people who really do want all of the pie? I could argue that they refrain only because they reason that taking all the pie would incur social penalties. But that would result in morals that vanish when no one is looking. And that’s not the kind of morals normal people have.

Normal people don’t calculate the penalties they will incur from taking all the pie. Sociopaths do that. Unlike the “ethics as willpower” theorists, I am not going to construct a theory of ethics that takes sociopaths as normal.4 They are diseased, and my theory of ethical behavior does not have to explain their behavior, any more than a theory of rationality has to explain the behavior of schizophrenics. Now that we have a theory of evolution that can explain how altruism could evolve, we don’t have to come up with a theory of ethics that assumes people are not altruistic.

Why would you want to change your utility function?

Many LWers will reason like this: “I should never want to change my utility function. Therefore, I have no interest in effective means of changing my tastes or my ethics.”

Reasoning this way makes the distinction between ethics as willpower and ethics as taste less interesting. In fact, it makes the study of ethics in general less interesting—there is little motivation other than to figure out what your ethics are, and to use ethics to manipulate others into optimizing your values.

You don’t have to contemplate changing your utility function for this distinction to be somewhat interesting. We are usually talking about society collectively deciding how to change each other’s utility functions. The standard LessWrongian view is compatible with this: You assume that ethics is a social game in which you should act deceptively, trying to foist your utility function on other people while avoiding letting yours be changed.

But I think we can contemplate changing our utility functions. The short answer is that you may choose to change your future utility function when doing so will have the counter-intuitive effect of better-fulfilling your current utility function (as some humans do in one ending of Eliezer’s story about babyeating aliens). This can usually be described as a group of people all conspiring to choose utility functions that collectively solve prisoners’ dilemmas, or (as in the case just cited) as a rational response to a threatened cost that your current utility function is likely to trigger. (You might model this as a pre-commitment, like one-boxing, rather than as changing your utility function. The results should be the same. Consciously trying to change your behavior via pre-commitment, however, may be more difficult, and may be interpreted by others as deception and punished.)

(There are several longer, more frequently-applicable answers; but they require a separate post.)

Fuzzies and utilons

Eliezer’s post, “Purchase fuzzies and utilons separately”, on the surface appears to say that you should not try to optimize your utility function, but that you should instead satisfy two separate utility functions: a selfish utility function, and an altruistic utility function.

But remember what a utility function is. It’s a way of adding up all your different preferences and coming up with a single number. Coming up with a single number is important, so that all possible outcomes can be ordered. That’s what you need, and ordering is what numbers do. Having two utility functions is like having no utility function at all, because you don’t have an ordering of preferences.
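
A small sketch of why the single number matters (the outcomes and values are invented): one combined utility orders any pair of outcomes, while two separate utilities can leave a pair incomparable.

```python
# Illustrative only: the outcomes and numbers are made up.
outcomes = {
    "help the old lady yourself": {"selfish": 2, "altruistic": 9},
    "buy yourself a nice dinner": {"selfish": 9, "altruistic": 1},
}

# One combined utility: every pair of outcomes can be ranked.
combined = {name: sum(vals.values()) for name, vals in outcomes.items()}
ranking = sorted(combined, key=combined.get, reverse=True)
print(ranking)   # ['help the old lady yourself', 'buy yourself a nice dinner']

# Two separate utilities: neither outcome beats the other on both scores,
# so without a rule for combining them there is no ordering at all.
a, b = outcomes.values()
print(all(a[k] >= b[k] for k in a), all(b[k] >= a[k] for k in a))   # False False
```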

The “selfish utility function” and the “altruistic utility function” are different natural categories of human preferences. Eliezer is getting indirectly at the fact that the altruistic utility function (which gives output in “fuzzies”) is indexical. That is, its values have the word “I” in them. The altruistic utility function cares whether you help an old lady across the street, or some person you hired in Portland helps an old lady across the street. If you aren’t aware of this, you may say, “It is more cost-effective to hire boy scouts (who work for less than minimum wage) to help old ladies across the street and achieve my goal of old ladies having been helped across the street.” But your real utility function prefers that you help them across the street yourself; and so this doesn’t work.

Conclusion

The old religious view of ethics as supernatural and contrary to human nature is dysfunctional and based on false assumptions. Many religious people claim that evolutionary theory leads to the destruction of ethics, by teaching us that we are “just” animals. But ironically, it is evolutionary theory that provides us with the understanding we need to build ethical societies. Now that we have this explanation, the “ethics as taste” theory deserves to be evaluated again, to see whether it isn’t more sensible and more productive than the “ethics as willpower” theory.

0. I use the phrase “effectively believe” to mean both having a belief, and having habits of thought that cause you to also believe the logical consequences of that belief.

1. We have disagreements, such as the possibility of dividing values into terminal and instrumental, the relation of the values of the mind to the values of its organism, and whether having a value implies that propagating that value is also a value of yours (I say no). But they don’t come into play here.

3. For more details, see Eliezer’s meta-ethics sequence.

4. Also, I do not take Gandhi as morally normal. Not all brains develop as their genes planned; and we should expect as many humans to be pathologically good as are pathologically evil. (A biographical comparison between Gandhi and Hitler shows a remarkable number of similarities.)