On Terminal Goals and Virtue Ethics

Introduction

A few months ago, my friend said the following to me: “After seeing Divergent, I finally understand virtue ethics. The main character is a cross between Aristotle and you.”

That was an impossible-to-resist pitch, and I saw the movie. The thing that resonated most with me–also the thing that my friend thought I had in common with the main character–was the idea that you could make a particular decision, and set yourself down a particular course of action, in order to make yourself become a particular kind of person. Tris didn’t join the Dauntless faction because she thought they were doing the most good in society, or because she thought her comparative advantage for doing good lay there–she chose it because they were brave, and she wasn’t, yet, and she wanted to be. Bravery was a virtue that she thought she ought to have. If the graph of her motivations went any deeper, the only node beyond ‘become brave’ was ‘become good.’

(Tris did have a concept of some future world-outcomes being better than others, and she did want to have an effect on the world. But that wasn’t the causal reason she chose Dauntless; as far as I can tell, it was unrelated.)

My twelve-year-old self had a similar attitude. I read a lot of fiction, and stories had heroes, and I wanted to be like them–and that meant acquiring the right skills and the right traits. I knew I was terrible at reacting under pressure–that in the case of an earthquake or other natural disaster, I would freeze up and not be useful at all. Being good at reacting under pressure was an important trait for a hero to have. I could be sad that I didn’t have it, or I could decide to acquire it by doing the things that scared me over and over and over again. So that someday, when the world tried to throw bad things at my friends and family, I’d be ready.

You could call that an awfully passive way to look at things. It reveals a deep-seated belief that I’m not in control, that the world is big and complicated and beyond my ability to understand and predict, much less steer–that I am not the locus of control. But this way of thinking is an algorithm. It will almost always spit out an answer, when otherwise I might get stuck in the complexity and unpredictability of trying to make a particular outcome happen.

Virtue Ethics

I find the different houses of the HPMOR universe to be a very compelling metaphor. It’s not because they suggest actions to take; instead, they suggest virtues to focus on, so that when a particular situation comes up, you can act ‘in character.’ Courage and bravery for Gryffindor, for example. The house system also suggests that different people can focus on different virtues–diversity is a useful thing to have in the world. (I’m probably mangling the concept of virtue ethics here, not having any background in philosophy, but it’s the closest term for the thing I mean.)

I’ve thought a lot about the virtue of loyalty. In the past, loyalty has kept me with jobs and friends that, from an objective perspective, might not seem like the optimal things to spend my time on. But the costs of quitting and finding a new job, or cutting off friendships, wouldn’t just have been the direct consequences in the world, like needing to spend a bunch of time handing out resumes or having an unpleasant conversation. There would also have been a shift within myself, a weakening of the drive towards loyalty. It wasn’t that I thought everyone ought to be extremely loyal–it’s a virtue with obvious downsides and failure modes. But it was a virtue that I wanted, partly because it seemed undervalued.

By calling myself a ‘loyal person’, I can aim myself in a particular direction without having to understand all the subcomponents of the world. More importantly, I can make decisions even when I’m rushed, or tired, or under cognitive strain that makes it hard to calculate through all of the consequences of a particular action.

Terminal Goals

The Less Wrong/CFAR/rationalist community puts a lot of emphasis on a different way of trying to be a hero–one where you start from a terminal goal, like “saving the world”, break it into subgoals, and do whatever it takes to accomplish it. In the past I’ve thought of myself as being mostly consequentialist, in terms of morality, and this is a very consequentialist way to think about being a good person. But it doesn’t feel like it would work.

There are some bad reasons why it might feel wrong–e.g. that it feels arrogant to think you can accomplish something that big–but I think the main reason is that it feels fake. There is strong social pressure in the CFAR/Less Wrong community to claim that you have terminal goals, that you’re working towards something big. My System 2 understands terminal goals and consequentialism as a thing that other people do–I could talk about my terminal goals, get the points, and fit in, but I’d be lying about my thoughts. My model of my mind would be incorrect, and that would have consequences for, say, whether my plans actually worked.

Practicing the art of rationality

Recently, Anna Salamon brought up a question with the other CFAR staff: “What is the thing that’s wrong with your own practice of the art of rationality?” The terminal goals thing was what I thought of immediately–namely, the conversations I’ve had over the past two years, where other rationalists have asked me “so what are your terminal goals/values?” and I’ve stammered something and then gone to hide in a corner and try to come up with some.

In Alicorn’s Luminosity, Bella says about her thoughts that “they were liable to morph into versions of themselves that were more idealized, more consistent—and not what they were originally, and therefore false. Or they’d be forgotten altogether, which was even worse (those thoughts were mine, and I wanted them).”

I want to know true things about myself. I also want to impress my friends by having the traits that they think are cool, but not at the price of faking it–my brain screams that pretending to be something other than what you are isn’t virtuous. When my immediate response to someone asking me about my terminal goals is “but brains don’t work that way!”, that may not be a true statement about all brains, but it’s a true statement about my brain. My motivational system is wired in a certain way. I could think of it as broken; I could let my friends convince me that I needed to change, and try to shoehorn my brain into a different shape; or I could accept that it works–that I get things done, people find me useful to have around, and this is how I am. For now. I’m not going to rule out future attempts to hack my brain–because Growth Mindset, and because maybe other reasons will convince me that it’s important enough–but if I do it, it’ll be on my terms. Other people are welcome to their terminal goals and existential struggles. I’m okay the way I am–I have an algorithm to follow.

Why write this post?

It would be an awfully surprising coincidence if mine were the only brain that worked this way. I’m not a special snowflake. And other people who interact with the Less Wrong community might not deal with it the way I do. They might try to twist their brains into the ‘right’ shape and break their motivational systems. Or they might decide that rationality is stupid and walk away.