Virtue Ethics for Consequentialists

Meta: Influenced by a cool blog post by Kaj, which was in turn influenced by a cool Michael Vassar idea (like pretty much everything else; the man sure has a lot of ideas). The name of this post is intended to be taken slightly more literally than the similarly titled Deontology for Consequentialists.

There’s been a hip new trend going around the Singularity Institute Visiting Fellows house lately, and it’s not postmodernism. It’s virtue ethics. “What, virtue ethics?! Are you serious?” Yup. I’m so contrarian I think cryonics isn’t obvious and that virtue ethics is better than consequentialism. This post will explain why.

When I first heard about virtue ethics I assumed it was a clever way for people to justify things they did when both the consequences and the reasons were bad. People are very good at spinning tales about how virtuous they are, even better than they are at inventing plausible justifications for actions that turned out unpopular, and it’s hard to spin the consequences of your actions as good when everyone is keeping score. But it seems that moral theorists were mostly thinking in far mode and didn’t have much incentive to create a moral theory that benefited them the most, so my Hansonian hypothesis falls flat. Why, then, did Plato and Aristotle and everyone up until the Enlightenment find virtue ethics appealing? Well...

Moral philosophy was designed for humans, not for rational agents. When you’re used to thinking about artificial intelligence, economics, and decision theory, it’s easy to forget that we’re hyperbolic discounters: nothing resembling sane. Humans are not inherently expected utility maximizers; they’re bounded agents with little capacity for reflection. Utility functions are great and all, but in the words of Zack M. Davis, “Humans don’t have utility functions.” Similarly, Kaj warns us: “be extra careful when you try to apply the concept of a utility function to human beings.” Back in the day nobody thought smarter-than-human intelligence was possible, and many still don’t. Philosophers came up with ways for people to live their lives, have a good time, be respected, and do good things; they weren’t even trying to create morals for anyone too far outside the norm of whatever society they inhabited at the time, or whatever society they imagined to be perfect. I personally think that the Buddha had some really interesting things to say and that his ideas about ethics are no exception (though I suspect he may have had pain asymbolia, which totally deserves its own post soon). Epicurus, Mill, and Bentham were great thinkers and all, but it’s not obvious that what they were saying is best practice for individual people, even if their ideas about policy are strictly superior to the alternatives. Virtue ethics is good for bounded agents: you don’t have to waste memory on what a personalized rulebook says about different kinds of milk, and you don’t have to think 15 inferential steps ahead to determine whether you should drink skim or whole.

You can be a virtue ethicist whose virtue is to do the consequentialist thing to do (because your deontological morals say that’s what is right). Consequentialists, deontologists, and virtue ethicists don’t really disagree on any major points in day-to-day life, just in crazy situations like trolley problems. And anyway, they’re all actually virtue ethicists: they’re trying to do the ‘consequentialist’ or ‘deontologist’ things to do, which happen to usually be the same. Alicorn’s decided to do her best to reduce existential risk, and I, being a pseudo-consequentialist, have also decided to do my best to reduce existential risk. Virtue ethicists can do these things too, but they can also exploit the consistency effects such actions invariably come with. If you’re a virtue ethicist it’s easier to say “I’m the type of person who will reply to all of the emails in my inbox and sort them into my GTD system, because organization and conscientiousness are virtues” and use this as a way to motivate yourself. So go ahead and be a virtue ethicist for the consequences (...or a consequentialist because it’s deontic). It’s not illegal!

Retooled virtue ethics is better for your instrumental rationality. The Happiness Hypothesis critiqued the way Western ethics, in both the deontological tradition started by Immanuel Kant and the consequentialist tradition started by Jeremy Bentham, has become increasingly reason-based:

The philosopher Edmund Pincoffs has argued that consequentialists and deontologists worked together to convince Westerners in the twentieth century that morality is the study of moral quandaries and dilemmas. Where the Greeks focused on the character of a person and asked what kind of person we should each aim to become, modern ethics focuses on actions, asking when a particular decision is right or wrong. Philosophers wrestle with life-and-death dilemmas: Kill one to save five? Allow aborted fetuses to be used as a source of stem cells? [...] This turn from character ethics to quandary ethics has turned moral education away from virtues and towards moral reasoning. If morality is about dilemmas, then moral education is training in problem solving. Children must be taught how to think about moral problems, especially how to overcome their natural egoism and take into their calculations the needs of others.

[...] I believe that this turn from character to quandary was a profound mistake, for two reasons. First, it weakens morality and limits its scope. Where the ancients saw virtue and character at work in everything a person does, our modern conception confines morality to a set of situations that arise for each person only a few times in any given week [...] The second problem with the turn to moral reasoning is that it relies on bad psychology. Many moral education efforts since the 1970s take the rider off the elephant and train him to solve problems on his own. After being exposed to hours of case studies, classroom discussions about moral dilemmas, and videos about people who faced dilemmas and made the right choices, the child learns how (not what) to think. Then class ends, the rider gets back on the elephant, and nothing changes at recess. Trying to make children behave ethically by teaching them to reason well is like trying to make a dog happy by wagging its tail. It gets causality backwards.

To quote Kaj’s response to the above:

Reading this chapter, that critique and the description of how people like Benjamin Franklin made it into an explicit project to cultivate their various virtues one at a time, I could feel a very peculiar transformation take place within me. The best way I can describe it is that it felt like a part of my decision-making or world-evaluating machinery separated itself from the rest and settled into a new area of responsibility that I had previously not recognized as a separate one. While I had previously been primarily a consequentialist, that newly-specialized part declared its allegiance to virtue ethics, even though the rest of the machinery remained consequentialist. [...]

What has this meant in practice? Well, I’m not quite sure of the long-term effects yet, but I think that my emotional machinery kind of separated from my general decision-making and planning machinery. Think of “emotional machinery” as a system that takes various sorts of information as input and produces different emotional states as output. Optimally, your emotional machinery should attempt to create emotions that push you towards taking the kinds of actions that are most appropriate given your goals. Previously I was sort of embedded in the world and the emotional system was taking its input from the entire whole: the way I was, the way the world was, and the way that those were intertwined. It was simultaneously trying to optimize for all three, with mixed results.

But now, my self-model was set separate from the world-model, and my emotional machinery started running its evaluations primarily based on the self-model. The main questions became “how could I develop myself”, “how could I be more virtuous” and “how could I best act to improve the world”. From the last bit, you can see that I haven’t lost the consequentialist layer in my decision-making: I am still trying to act in ways that improve the world. But now it’s more like my emotional systems are taking input from the consequentialist planning system to figure out what virtues to concentrate on, instead of the consequentialist reasoning being completely intertwined with my emotional systems.

Applying both consequentialist and virtue ethicist layers to the way you actually get things done in the real world seems to me a great idea. It recognizes that most of us don’t actually have that much control over what we do. Acknowledging this and dealing with its consequences, and what it says about us, allows us to do the things we want and feel good about it at the same time.

So, if you’d like, try being a virtue ethicist for a week. If a key part of epistemic rationality is having your beliefs pay rent in anticipated experience, then instrumental rationality is about having your actions pay rent in expected utility. Use science! If being a virtue ethicist helps even one person be more the person they want to be, like it did for Kaj, then this post was well worth the time spent.