Virtue Ethics for Consequentialists

Meta: Influenced by a cool blog post by Kaj, which was influenced by a cool Michael Vassar idea (like pretty much everything else; the man sure has a lot of ideas). The name of this post is intended to be taken slightly more literally than the similarly titled Deontology for Consequentialists.

There’s been a hip new trend going around the Singularity Institute Visiting Fellows house lately, and it’s not postmodernism. It’s virtue ethics. “What, virtue ethics?! Are you serious?” Yup. I’m so contrarian I think cryonics isn’t obvious and that virtue ethics is better than consequentialism. This post will explain why.

When I first heard about virtue ethics I assumed it was a clever way for people to justify things they did when the consequences were bad and the reasons were bad, too. People are very good at spinning tales about how virtuous they are, even more so than at finding good reasons for why they did things that turned out unpopular, and it’s hard to spin the consequences of your actions as good when everyone is keeping score. But it seems that moral theorists were mostly thinking in far mode and didn’t have much incentive to create a moral theory that benefited them the most, so my Hansonian hypothesis falls flat. Why did Plato and Aristotle and everyone up until the Enlightenment find virtue ethics appealing, then? Well...

Moral philosophy was designed for humans, not for rational agents. When you’re used to thinking about artificial intelligence, economics, and decision theory, it gets easy to forget that we’re hyperbolic discounters: not anything resembling sane. Humans are not inherently expected utility maximizers; they’re bounded agents with little capacity for reflection. Utility functions are great and all, but in the words of Zack M. Davis, “Humans don’t have utility functions.” Similarly, Kaj warns us: “be extra careful when you try to apply the concept of a utility function to human beings.” Back in the day nobody thought smarter-than-human intelligence was possible, and many still don’t. Philosophers came up with ways for people to live their lives, have a good time, be respected, and do good things; they weren’t even trying to create morals for anyone too far outside the norm of whatever society they inhabited at the time, or whatever society they imagined to be perfect. I personally think that the Buddha had some really interesting things to say and that his ideas about ethics are no exception (though I suspect he may have had pain asymbolia, which totally deserves its own post soon). Epicurus, Mill, and Bentham were great thinkers and all, but it’s not obvious that what they were saying is best practice for individual people, even if their ideas about policy are strictly superior to the alternative options. Virtue ethics is good for bounded agents: you don’t have to waste memory on what a personalized rulebook says about different kinds of milk, and you don’t have to think 15 inferential steps ahead to determine if you should drink skim or whole.

You can be a virtue ethicist whose virtue is to do the consequentialist thing to do (because your deontological morals say that’s what is right). Consequentialists, deontologists, and virtue ethicists don’t really disagree on any major points in day-to-day life, just in crazy situations like trolley problems. And anyway, they’re all actually virtue ethicists: they’re trying to do the ‘consequentialist’ or ‘deontologist’ things to do, which happen to usually be the same. Alicorn’s decided to do her best to reduce existential risk, and I, being a pseudo-consequentialist, have also decided to do my best to reduce existential risk. Virtue ethicists can do these things too, but they can also abuse the consistency effects such actions invariably come with. If you’re a virtue ethicist it’s easier to say “I’m the type of person who will reply to all of the emails in my inbox and sort them into my GTD system, because organization and conscientiousness are virtues” and use this as a way to motivate yourself. So go ahead and be a virtue ethicist for the consequences (...or a consequentialist because it’s deontic). It’s not illegal!

Retooled virtue ethics is better for your instrumental rationality. The Happiness Hypothesis critiqued the way Western ethics, in both the deontologist tradition started by Immanuel Kant and the consequentialist tradition started by Jeremy Bentham, has become increasingly reason-based:

The philosopher Edmund Pincoffs has argued that consequentialists and deontologists worked together to convince Westerners in the twentieth century that morality is the study of moral quandaries and dilemmas. Where the Greeks focused on the character of a person and asked what kind of person we should each aim to become, modern ethics focuses on actions, asking when a particular decision is right or wrong. Philosophers wrestle with life-and-death dilemmas: Kill one to save five? Allow aborted fetuses to be used as a source of stem cells? [...] This turn from character ethics to quandary ethics has turned moral education away from virtues and towards moral reasoning. If morality is about dilemmas, then moral education is training in problem solving. Children must be taught how to think about moral problems, especially how to overcome their natural egoism and take into their calculations the needs of others.

[...] I believe that this turn from character to quandary was a profound mistake, for two reasons. First, it weakens morality and limits its scope. Where the ancients saw virtue and character at work in everything a person does, our modern conception confines morality to a set of situations that arise for each person only a few times in any given week [...] The second problem with the turn to moral reasoning is that it relies on bad psychology. Many moral education efforts since the 1970s take the rider off the elephant and train him to solve problems on his own. After being exposed to hours of case studies, classroom discussions about moral dilemmas, and videos about people who faced dilemmas and made the right choices, the child learns how (not what) to think. Then class ends, the rider gets back on the elephant, and nothing changes at recess. Trying to make children behave ethically by teaching them to reason well is like trying to make a dog happy by wagging its tail. It gets causality backwards.

To quote Kaj’s response to the above:

Reading this chapter, that critique and the description of how people like Benjamin Franklin made it into an explicit project to cultivate their various virtues one at a time, I could feel a very peculiar transformation take place within me. The best way I can describe it is that it felt like a part of my decision-making or world-evaluating machinery separated itself from the rest and settled into a new area of responsibility that I had previously not recognized as a separate one. While I had previously been primarily a consequentialist, that newly-specialized part declared its allegiance to virtue ethics, even though the rest of the machinery remained consequentialist. [...]

What has this meant in practice? Well, I’m not quite sure of the long-term effects yet, but I think that my emotional machinery kind of separated from my general decision-making and planning machinery. Think of “emotional machinery” as a system that takes various sorts of information as input and produces different emotional states as output. Optimally, your emotional machinery should attempt to create emotions that push you towards taking the kinds of actions that are most appropriate given your goals. Previously I was sort of embedded in the world and the emotional system was taking its input from the entire whole: the way I was, the way the world was, and the way that those were intertwined. It was simultaneously trying to optimize for all three, with mixed results.

But now, my self-model was set separate from the world-model, and my emotional machinery started running its evaluations primarily based on the self-model. The main questions became “how could I develop myself”, “how could I be more virtuous” and “how could I best act to improve the world”. From the last bit, you can see that I haven’t lost the consequentialist layer in my decision-making: I am still trying to act in ways that improve the world. But now it’s more like my emotional systems are taking input from the consequentialist planning system to figure out what virtues to concentrate on, instead of the consequentialist reasoning being completely intertwined with my emotional systems.

Applying both consequentialist and virtue ethicist layers to the way you actually get things done in the real world seems to me a great idea. It recognizes that most of us don’t actually have that much control over what we do. Acknowledging this and dealing with its consequences, and what it says about us, allows us to do the things we want and feel good about it at the same time.

So, if you’d like, try being a virtue ethicist for a week. If a key part of epistemic rationality is having your beliefs pay rent in anticipated experience, then instrumental rationality is about having your actions pay rent in expected utility. Use science! If being a virtue ethicist helps even one person be more the person they want to be, as it did for Kaj, then this post was well worth the time spent.