The Trouble With “Good”

Related to: How An Algorithm Feels From Inside, The Affect Heuristic, The Power of Positivist Thinking

I am a normative utilitarian and a descriptive emotivist: I believe utilitarianism is the correct way to resolve moral problems, but that the normal mental algorithms for resolving moral problems use emotivism.

Emotivism, aka the yay/boo theory, is the belief that moral statements, however official they may sound, are merely personal opinions of preference or dislike. Thus, “feeding the hungry is a moral duty” corresponds to “yay for feeding the hungry!” and “murdering kittens is wrong” corresponds to “boo for kitten murderers!”

Emotivism is a very nice theory of what people actually mean when they make moral statements. Billions of people around the world, even the non-religious, happily make moral statements every day without having any idea what they reduce to or feeling like they ought to reduce to anything.

Emotivism also does a remarkably good job capturing the common meanings of the words “good” and “bad”. An average person may have beliefs like “pizza is good, but seafood is bad”, “Israel is good, but Palestine is bad”, “the book was good, but the movie was bad”, “atheism is good, theism is bad”, “evolution is good, creationism is bad”, and “dogs are good, but cats are bad”. Some of these seem to be moral beliefs, others seem to be factual beliefs, and others seem to be personal preferences. But we are happy using the word “good” for all of them, and it doesn’t feel like we’re using the same word in several different ways, the way it does when we use “right” to mean both “correct” and “opposite of left”. It feels like they’re all just the same thing. The moral theory that captures that feeling is emotivism. Yay pizza, books, Israelis, atheists, dogs, and evolution! Boo seafood, Palestinians, movies, theists, creationism, and cats!

Remember, evolution is a crazy tinker who recycles everything. So it would not be surprising to find that our morality is a quick hack on the same machinery that runs our decisions about which food to eat or which pet to adopt. To make an outrageous metaphor: our brains run a system rather like Less Wrong’s karma. You’re allergic to cats, so you down-vote “cats” a couple of points. You hear about a Palestinian committing a terrorist attack, so you down-vote “Palestinians” a few points. Richard Dawkins just said something especially witty, so you up-vote “atheism”. High karma score means seek it, use it, acquire it, or endorse it. Low karma score means avoid it, ignore it, discard it, or condemn it.1
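To make the metaphor even more concrete, here is a minimal sketch in Python of the kind of scoring heuristic described above. The scores and category names are invented for illustration only; the point is just that every stray piece of affect, relevant or not, gets folded into a single running total per concept, and the total alone drives the verdict.

```python
# Toy model of the "mental karma" heuristic described above.
# All scores and categories are illustrative, not empirical claims.
karma = {}

def vote(concept, points):
    """Fold any positive or negative feeling into the concept's single score."""
    karma[concept] = karma.get(concept, 0) + points

def attitude(concept):
    """Decide purely from the net score: seek/endorse vs. avoid/condemn."""
    score = karma.get(concept, 0)
    return "seek it, use it, endorse it" if score > 0 else "avoid it, ignore it, condemn it"

vote("cats", -2)           # allergic to cats
vote("Palestinians", -3)   # heard about a terrorist attack
vote("atheism", +1)        # Dawkins said something witty

print(attitude("cats"))     # -> avoid it, ignore it, condemn it
print(attitude("atheism"))  # -> seek it, use it, endorse it
```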

Remember back during the presidential election, when a McCain supporter claimed that an Obama supporter attacked her and carved a “B” on her face with a knife? This was HUGE news. All of my Republican friends started emailing me and saying “Hey, did you hear about this, this proves we’ve been right all along!” And all my Democratic friends were grumbling and saying how it was probably made up and how we should all just forget the whole thing.

And then it turned out it WAS all made up, and the McCain supporter had faked the whole affair. And now all of my Democratic friends started emailing me and saying “Hey, did you hear about this, it shows what those Republicans and McCain supporters are REALLY like!” and so on, and the Republicans were trying to bury it as quickly as possible.

The overwhelmingly interesting thing I noticed here was that everyone seemed to accept—not explicitly, but implicitly very much—that an Obama supporter acting violently was in some sense evidence against Obama or justification for opposition to Obama; or, that a McCain supporter acting dishonestly was in some sense evidence against McCain or confirmation that Obama supporters were better people. To a Bayesian, this would be balderdash. But to an emotivist, where any bad feelings associated with Obama count against him, it sort of makes sense. All those people emailing me about this were saying: Look, here is something negative associated with Obama; downvote him!2

So this is one problem: the inputs to our mental karma system aren’t always closely related to the real merit of a person/thing/idea.

Another problem: our interpretation of whether to upvote or downvote something depends on how many upvotes or downvotes it already has. Here on Less Wrong we call this an information cascade. In the mind, we call it an Affective Death Spiral.

Another problem: we are tempted to assign everything about a concept the same score. Eliezer Yudkowsky currently has 2486 karma. How good is Eliezer at philosophy? Apparently somewhere around the level it would take to get 2486 karma. How much does he know about economics? Somewhere around level 2486 would be my guess. How well does he write? Probably well enough to get 2486 karma. Translated into mental terms, this looks like the Halo Effect. Yes, we can pick apart our analyses in greater detail; having read Eliezer’s posts, I know he’s better at some things than others. But that 2486 number is going to cause anchoring-and-adjustment issues even so.

But the big problem, the world-breaking problem, is that sticking everything good and bad about something into one big bin and making decisions based on whether it’s a net positive or a net negative is an unsubtle, leaky heuristic completely unsuitable for complicated problems.

Take gun control. Are guns good or bad? My gut-level emotivist response is: bad. They’re loud and scary and dangerous and they shoot people and often kill them. It is very tempting to say: guns are bad, therefore we should have fewer of them, therefore gun control. I’m not saying gun control is therefore wrong: reversed stupidity is not intelligence. I’m just saying that before you can rationally consider whether or not gun control is wrong, you need to get past this mode of thinking about the problem.

In the hopes of using theism less often, a bunch of Less Wrongers have agreed that the War on Drugs would make a good stock example of irrationality. So, why is the War on Drugs so popular? I think it’s because drugs are obviously BAD. They addict people, break up their families, destroy their health, drive them into poverty, and eventually kill them. If we’ve got to have a category “drugs”3, and we’ve got to call it either “good” or “bad”, then “bad” is clearly the way to go. And if drugs are bad, getting rid of them would be good! Right?

So how do we avoid all of these problems?

I said at the very beginning that I think we should switch to solving moral problems through utilitarianism. But we can’t do that directly. If we ask utilitarianism “Are drugs good or bad?” it returns: CATEGORY ERROR. Good for it.

Utilitarianism can only be applied to states, actions, or decisions, and it can only return a comparative result. Want to know whether stopping or diverting the trolley in the Trolley Problem would be better? Utilitarianism can tell you. That’s because it’s a decision between two alternatives (alternate way of looking at it: two possible actions; or two possible states) and all you need to do is figure out which of the two is higher utility.
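A minimal sketch of the contrast, again in Python with utility numbers invented purely for illustration: utilitarianism takes two concrete outcomes and returns whichever scores higher, and if you hand it a bare category like “drugs” there is simply no outcome for it to evaluate.

```python
# Utilitarianism as a comparison over concrete outcomes, not a label for categories.
# The utility values below are made-up placeholders, not real estimates.

def utility(outcome):
    """Map a concrete outcome description to a (made-up) utility value."""
    utilities = {
        "trolley diverted, one person dies": -1.0,
        "trolley not diverted, five people die": -5.0,
    }
    return utilities[outcome]

def better_outcome(a, b):
    """Return whichever of two concrete outcomes has higher utility."""
    return a if utility(a) > utility(b) else b

print(better_outcome("trolley diverted, one person dies",
                     "trolley not diverted, five people die"))

# "Are drugs good or bad?" gives the function nothing to score:
# utility("drugs")  # -> KeyError; the question is a category error here
```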

When people say “Utilitarianism says slavery is bad” or “Utilitarianism says murder is wrong”—well, a utilitarian would endorse those statements over their opposites, but it takes a lot of interpretation first. What utilitarianism properly says is “In this particular situation, the action of freeing the slaves leads to a higher utility state than not doing so” and possibly “and the same would be true of any broadly similar situation”.

But why in blue blazes can’t we just go ahead and say “slavery is bad”? What could possibly go wrong?

Ask an anarchist. Taxation of X% means you’re forced to work for X% of the year without getting paid. Therefore, since slavery is “being forced to work without pay”, taxation is slavery. Since slavery is bad, taxation is bad. Therefore government is bad and statists are no better than slavemasters.4

(again, reversed stupidity is not intelligence. There are good arguments against taxation. But this is not one of them.)

Emotivism is the native architecture of the human mind. No one can think like a utilitarian all the time. But when you are in an Irresolvable Debate, utilitarian thinking may become necessary to avoid dangling variable problems around the word “good” (cf. Islam is a religion of peace). Problems that are insoluble at the emotivist level can be reduced, simplified, and resolved on the utilitarian level with enough effort.

I’ve used the example before, and I’ll use it again. Israel versus Palestine. One person can go on and on for months about all the reasons the Israelis are totally right and the Palestinians are completely in the wrong, and another person can go on just as long about how the Israelis are evil oppressors and the Palestinians just want freedom. And then if you ask them about an action, or a decision, or a state—they’ve never thought about it. They’ll both answer something like “I dunno, the two-state solution or something?”. And if they still disagree at this level, you can suddenly apply the full power of utilitarianism to the problem in a way that tugs sideways to all of their personal prejudices.

In general, any debate about whether something is “good” or “bad” is sketchy, and can be changed to a more useful form by converting the thing to an action and applying utilitarianism.

Footnotes:

1: It should be noted that this karma analogy can’t explain our original perception of good and bad, only the system we use for combining, processing and utilizing it. My guess is that the original judgment of good or bad takes place through association with other previously determined good or bad things, down to the bottom level of things that are programmed into the organism (i.e. pain, hunger, death), with some input from the rational centers.

2: More evidence: we tend to like the idea of “good” or “bad” being innate qualities of objects. Thus the alternative medicine practitioner who tells you that real medicine is bad, because it uses scary pungent chemicals, which are unhealthy, and alternative medicine is good, because it uses roots and plants and flowers, which everyone likes. Or fantasy books, where the Golden Sword of Holy Light can only be wielded for good, and the Dark Sword of Demonic Shadow can only be wielded for evil.

3: Of course, the battle has already been half-lost once you have a category “drugs”. Eliezer once mentioned something about how considering {Adolf Hitler, Joe Stalin, John Smith} a natural category isn’t going to do John Smith any good, no matter how nice a man he may be. In the category “drugs”, which looks like {cocaine, heroin, LSD, marijuana}, LSD and marijuana get to play the role of John Smith.

4: And, uh, I’m sure Louis XVI would feel the same way. Sorry. I couldn’t think of a better example.