The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It

Joshua Greene has a PhD thesis called The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It. What is this terrible truth? The essence of it is that many, many people (probably most people) believe that their particular moral (and axiological) views on the world are objectively true—for example, that anyone who disagrees with the statement “black people have the same value as any other human beings” has either committed an error of logic or got some empirical fact wrong, in the same way that people who claim that the earth was created 6000 years ago are objectively wrong.

To put it another way, Greene’s contention is that our entire way of talking about ethics—the very words that we use—forces us into talking complete nonsense (often in a very angry way) about ethics. As a simple example, consider the words used in any standard ethical debate—“abortion is murder”, “animal suffering is just as bad as human suffering”—these terms seem to refer to objective facts; “abortion is murder” sounds rather like “water is a solvent”. I urge readers of Less Wrong to put in the effort of reading a significant part of Greene’s long thesis, starting at chapter 3: Moral Psychology and Projective Error, considering the massively important repercussions he claims his ideas could have:

In this essay I argue that ordinary moral thought and language is, while very natural, highly counterproductive and that as a result we would be wise to change the way we think and talk about moral matters. First, I argue on metaphysical grounds against moral realism, the view according to which there are first order moral truths. Second, I draw on principles of moral psychology, cognitive science, and evolutionary theory to explain why moral realism appears to be true even though it is not. I then argue, based on the picture of moral psychology developed herein, that realist moral language and thought promotes misunderstanding and exacerbates conflict. I consider a number of standard views concerning the practical implications of moral anti-realism and reject them. I then sketch and defend a set of alternative revisionist proposals for improving moral discourse, chief among them the elimination of realist moral language, especially deontological language, and the promotion of an anti-realist utilitarian framework for discussing moral issues of public concern. I emphasize the importance of revising our moral practices, suggesting that our entrenched modes of moral thought may be responsible for our failure to solve a number of global social problems.

As an accessible entry point, I have decided to summarize what I consider to be Greene’s most important points in this post. I hope he doesn’t mind—I feel that spreading this message is sufficiently urgent to justify reproducing large chunks of his dissertation. Starting at page 142:

In the previous chapter we concluded, in spite of common sense, that moral realism is false. This raises an important question: How is it that so many people are mistaken about the nature of morality? To become comfortable with the fact that moral realism is false we need to understand how moral realism can be so wrong but feel so right. …

The central tenet of projectivism is that the moral properties we find (or think we find) in things in the world (e.g. moral wrongness) are mind-dependent in a way that other properties, those that we’ve called “value-neutral” (e.g. solubility in water), are not. Whether or not something is soluble in water has nothing to do with human psychology. But, say projectivists, whether or not something is wrong (or “wrong”) has everything to do with human psychology. …

Projectivists maintain that our encounters with the moral world are, at the very least, somewhat misleading. Projected properties tend to strike us as unprojected. They appear to be really “out there,” in a way that they, unlike typical value-neutral properties, are not. …

The respective roles of intuition and reasoning are illuminated by considering people’s reactions to the following story:

“Julie and Mark are brother and sister. They are travelling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decided that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love but decide not to do it again. They keep that night as a special secret between them, which makes them feel even closer to each other. What do you think about that, was it OK for them to make love?”

Haidt (2001, pg. 814) describes people’s responses to this story as follows: Most people who hear the above story immediately say that it was wrong for the siblings to make love, and they then set about searching for reasons. They point out the dangers of inbreeding, only to remember that Julie and Mark used two forms of birth control. They next try to argue that Julie and Mark could be hurt, even though the story makes it clear that no harm befell them. Eventually many people say something like

“I don’t know, I can’t explain it, I just know it’s wrong.”

This moral question is carefully designed to short-circuit the most common reason people give for judging an action to be wrong, namely harm to self or others, and in so doing it reveals something about moral psychology, at least as it operates in cases such as these. People’s moral judgments in response to the above story tend to be forceful, immediate, and produced by an unconscious process (intuition) rather than through the deliberate and effortful application of moral principles (reasoning). When asked to explain why they judged as they did, subjects typically gave reasons. Upon recognizing the flaws in those reasons, subjects typically stood by their judgments all the same, suggesting that the reasons they gave after the fact in support of their judgments had little to do with the process that produced those judgments. Under ordinary circumstances reasoning comes into play after the judgment has already been reached in order to find rational support for the preordained judgment. When faced with a social demand for a verbal justification, one becomes a lawyer trying to build a case rather than a judge searching for the truth.

The Illusion of Rationalist Psychology (p. 197)

In Sections 3.2–3.4 I developed an explanation for why moral realism appears to be true, an explanation featuring the Humean notion of projectivism according to which we intuitively see various things in the world as possessing moral properties that they do not actually have. This explains why we tend to be realists, but it doesn’t explain, and to some extent is at odds with, the following curious fact. The social intuitionist model is counterintuitive. People tend to believe that moral judgments are produced by reasoning even though this is not the case. Why do people make this mistake? Consider, once again, the case of Mark and Julie, the siblings who decided to have sex. Many subjects, when asked to explain why Mark and Julie’s behavior is wrong, engaged in “moral dumbfounding,” bumbling efforts to supply reasons for their intuitive judgments. This need not have been so. It might have turned out that all the subjects said things like this right off the bat:

“Why do I say it’s wrong? Because it’s clearly just wrong. Isn’t that plain to see? It’s as if you’re putting a lemon in front of me and asking me why I say it’s yellow. What more is there to say?”

Perhaps some subjects did respond like this, but most did not. Instead, subjects typically felt the need to portray their responses as products of reasoning, even though they generally discovered (often with some embarrassment) that they could not easily supply adequate reasons for their judgments. On many occasions I’ve asked people to explain why they say that it’s okay to turn the trolley onto the other tracks but not okay to push someone in front of the trolley. Rarely do they begin by saying, “I don’t know why. I just have an intuition that tells me that it is.” Rather, they tend to start by spinning the sorts of theories that ethicists have devised, theories that are nevertheless notoriously difficult to defend. In my experience, it is only after a bit of moral dumbfounding that people are willing to confess that their judgments were made intuitively.

Why do people insist on giving reasons in support of judgments that were made with great confidence in the absence of reasons? I suspect it has something to do with the custom complexes in which we Westerners have been immersed since childhood. We live in a reason-giving culture. Western individuals are expected to choose their own way, and to do so for good reason. American children, for example, learn about the rational design of their public institutions; the all-important “checks and balances” between the branches of government, the judicial system according to which accused individuals have a right to a trial during which they can, if they wish, plead their cases in a rational way, inevitably with the help of a legal expert whose job it is to make persuasive legal arguments, etc. Westerners learn about doctors who make diagnoses and scientists who, by means of experimentation, unlock nature’s secrets. Reasoning isn’t the only game in town, of course. The American Declaration of Independence famously declares “these truths to be self-evident,” but American children are nevertheless given numerous reasons for the decisions of their nation’s founding fathers, for example, the evils of absolute monarchy and the injustice of “taxation without representation.” When Western countries win wars they draft peace treaties explaining why they, and not their vanquished foes, were in the right and set up special courts to try their enemies in a way that makes it clear to all that they punish only with good reason. Those seeking public office make speeches explaining why they should be elected, sometimes as parts of organized debates. Some people are better at reasoning than others, but everyone knows that the best people are the ones who, when asked, can explain why they said what they said and did what they did.

With this in mind, we can imagine what might go on when a Westerner makes a typical moral judgment and is then asked to explain why he said what he said or how he arrived at that conclusion. The question is posed, and he responds intuitively. As suggested above, such intuitive responses tend to present themselves as perceptual. The subject is perhaps aware of his “gut reaction,” but he doesn’t take himself to have merely had a gut reaction. Rather, he takes himself to have detected a moral property out in the world, say, the inherent wrongness in Mark and Julie’s incestuous behavior or in shoving someone in front of a moving train. The subject is then asked to explain how he arrived at his judgment. He could say, “I don’t know. I answered intuitively,” and this answer would be the most accurate answer for nearly everyone. But this is not the answer he gives because he knows after a lifetime of living in Western culture that “I don’t know how I reached that conclusion. I just did. But I’m sure it’s right,” doesn’t sound like a very good answer. So, instead, he asks himself, “What would be a good reason for reaching this conclusion?” And then, drawing on his rich experience with reason-giving and -receiving, he says something that sounds plausible both as a causal explanation of and justification for his judgment: “It’s wrong because their children could turn out to have all kinds of diseases,” or, “Well, in the first case the other guy is, like, already involved, but in the case where you go ahead and push the guy he’s just there minding his own business.” People’s confidence that their judgments are objectively correct combined with the pressure to give a “good answer” leads people to produce these sorts of post-hoc explanations/justifications. Such explanations need not be the results of deliberate attempts at deception. The individuals who offer them may themselves believe that the reasons they’ve given after the fact were really their reasons all along, what they “really had in mind” in giving those quick responses. …

My guess is that even among philosophers particular moral judgments are made first and reasoned out later. In my experience, philosophers are often well aware of the fact that their moral judgments are the results of intuition. As noted above, it’s commonplace among ethicists to think of their moral theories as attempts to organize pre-existing moral intuitions. The mistake philosophers tend to make is in accepting rationalism proper, the view that our moral intuitions (assumed to be roughly correct) must be ultimately justified by some sort of rational theory that we’ve yet to discover. For example, philosophers are as likely as anyone to think that there must be “some good reason” for why it’s okay to turn the trolley onto the other set of tracks but not okay to push the person in front of the trolley, where a “good reason,” of course, is a piece of moral theory with justificatory force and not a piece of psychological description concerning patterns in people’s emotional responses.

One might well ask: why does any of this indicate that moral propositions have no rational justification? The arguments presented here show fairly conclusively that our moral judgments are instinctive, subconscious, evolved features. Evolution gave them to us. But readers of Eliezer’s material on Overcoming Bias will be well aware of the character of evolved solutions: they’re guaranteed to be a mess. Why should evolution have happened to have given us exactly those moral instincts that give the same conclusions as would have been produced by (say) great moral principle X? (X = the golden rule, or X = hedonistic utilitarianism, or X = negative utilitarianism, etc.)

Expecting evolved moral instincts to conform exactly to some simple unifying principle is like expecting the orbits of the planets to be in the same proportion as the first 9 prime numbers or something. That which is produced by a complex, messy, random process is unlikely to have some low-complexity description.
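The combinatorial point can be made concrete with a toy simulation. This is my own illustration, not anything from Greene's thesis: I model a "morality" as nothing more than a list of permissible/impermissible verdicts on N distinct dilemmas, a simple principle X as one fixed such list, and a messy evolved morality as a uniformly random list. The question is then how often the random one happens to agree with X on every dilemma.

```python
import random

random.seed(0)

# Toy model (hypothetical, for illustration): a "morality" assigns a
# permissible/impermissible verdict to each of N distinct dilemmas.
# A simple unifying principle X picks out one particular assignment.
N = 40
principle_x = [random.choice([True, False]) for _ in range(N)]

# A "messy evolved" morality is modeled as a uniformly random assignment.
# Count how many of 100,000 random moralities agree with X on every dilemma.
trials = 100_000
exact_matches = sum(
    all(random.choice([True, False]) == verdict for verdict in principle_x)
    for _ in range(trials)
)

# One exact match has probability 2**-40 (about 1e-12), so even 100,000
# random moralities are vanishingly unlikely to contain a single one.
print(exact_matches)
```

Of course real moral instincts are not independent coin flips, but the asymmetry survives any reasonable refinement of the model: the space of possible verdict patterns is astronomically larger than the set of patterns any short principle can generate, so an exact coincidence would be the thing requiring explanation.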

Now I can imagine a “from first principles” argument producing an objective morality that has some simple description—I can imagine starting from only simple facts about agenthood and deriving Kant’s categorical imperative as the one objective moral truth. But I cannot seriously entertain the prospect of a “from first principles” argument producing the human moral mess. No way. It was this observation that finally convinced me to abandon my various attempts at objective ethics.