Ethical Inhibitions

Followup to: Entangled Truths, Contagious Lies, Evolutionary Psychology

What’s up with that bizarre emotion we humans have, this sense of ethical caution?

One can understand sexual lust, parental care, and even romantic attachment. The evolutionary psychology of such emotions might be subtler than it at first appears, but if you ignore the subtleties, the surface reasons are obvious. But why a sense of ethical caution? Why honor, why righteousness? (And no, it’s not group selection; it never is.) What reproductive benefit does that provide?

The specific ethical codes that people feel uneasy violating vary from tribe to tribe (though there are certain regularities). But the emotion associated with feeling ethically inhibited—well, I Am Not An Evolutionary Anthropologist, but that looks like a human universal to me, something with brainware support.

The obvious story behind prosocial emotions in general is that those who offend against the group are sanctioned; this converts the emotion to an individual reproductive advantage. The human organism, executing the ethical-caution adaptation, ends up avoiding the group sanctions that would follow a violation of the code. This obvious answer may even be the entire answer.

But I suggest—if a bit more tentatively than usual—that by the time human beings were evolving the emotion associated with “ethical inhibition”, we were already intelligent enough to observe the existence of such things as group sanctions. We were already smart enough (I suggest) to model what the group would punish, and to fear that punishment.

Sociopaths have a concept of getting caught, and they try to avoid getting caught. Why isn’t this sufficient? Why have an extra emotion, a feeling that inhibits you even when you don’t expect to be caught? Wouldn’t this, from evolution’s perspective, just result in passing up perfectly good opportunities?

So I suggest (tentatively) that humans naturally underestimate the odds of getting caught. We don’t foresee all the possible chains of causality, all the entangled facts that can bring evidence against us. Those ancestors who lacked a sense of ethical caution stole the silverware when they expected that no one would catch them or punish them; and were nonetheless caught or punished often enough, on average, to outweigh the value of the silverware.

Admittedly, this may be an unnecessary assumption. It is a general idiom of biology that evolution is the only long-term consequentialist; organisms compute short-term rewards. Hominids violate this rule, but that is a very recent innovation.

So one could counter-argue: “Early humans didn’t reliably forecast the punishment that follows from breaking social codes, so they didn’t reliably think consequentially about it, so they developed an instinct to obey the codes.” Maybe the modern sociopaths that evade being caught are smarter than average. Or modern sociopaths are better educated than hunter-gatherer sociopaths. Or modern sociopaths get more second chances to recover from initial stumbles—they can change their name and move. It’s not so strange to find an emotion executing in some exceptional circumstance where it fails to provide a reproductive benefit.

But I feel justified in bringing up the more complicated hypothesis, because ethical inhibitions are archetypally that which stops us even when we think no one is looking. A humanly universal concept, so far as I know, though I am not an anthropologist.

Ethical inhibition, as a human motivation, seems to be implemented in a distinct style from hunger or lust. Hunger and lust can be outweighed when stronger desires are at stake; but the emotion associated with ethical prohibitions tries to assert itself deontologically. If you have the sense at all that you shouldn’t do it, you have the sense that you unconditionally shouldn’t do it. The emotion associated with ethical caution would seem to be a drive that—successfully or unsuccessfully—tries to override the temptation, not just weigh against it.

A monkey can be trapped by a food reward inside a hollowed shell—it can reach in easily enough, but once it closes its fist, it can’t take its hand out. The monkey may be screaming with distress, and still be unable to override the instinct to keep hold of the food. We humans can do better than that; we can let go of the food reward and run away, when our brain is warning us of the long-term consequences.

But why does the sensation of ethical inhibition, which might also command us to pass up a food reward, have a similar override-quality—even in the absence of explicitly expected long-term consequences? Is it just that ethical emotions evolved recently, and happen to be implemented in prefrontal cortex next to the long-term-override circuitry?

What is this tendency to feel inhibited from stealing the food reward? This message that tries to assert “I override”, not just “I weigh against”? Even when we don’t expect the long-term consequences of being discovered?
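To make the weigh-versus-override distinction concrete in decision-theoretic terms: a weighed consideration enters the utility comparison as one more term, while an override removes prohibited options from the choice set before any comparison happens. Here is a minimal sketch in Python; the actions, utilities, and penalty weight are all invented for illustration, and nothing in this post specifies such a formalism.

```python
# Two toy architectures for how an ethical consideration could enter a choice.
# All actions, utilities, and the 'prohibited' flag are made up for illustration.

actions = {
    "steal_silverware": {"utility": 10, "prohibited": True},
    "leave_it":         {"utility": 0,  "prohibited": False},
}

ETHICAL_PENALTY = 5  # a finite weight, like hunger or lust

def choose_by_weighing(actions):
    """Ethics as one more term in the sum: a big enough temptation wins."""
    def net(name):
        u = actions[name]["utility"]
        return u - ETHICAL_PENALTY if actions[name]["prohibited"] else u
    return max(actions, key=net)

def choose_with_override(actions):
    """Ethics as a filter: prohibited actions never reach the comparison."""
    permitted = [a for a in actions if not actions[a]["prohibited"]]
    return max(permitted, key=lambda a: actions[a]["utility"])

print(choose_by_weighing(actions))    # steal_silverware (10 - 5 beats 0)
print(choose_with_override(actions))  # leave_it (stealing was never an option)
```

On this toy picture, hunger and lust behave like the first function; the puzzle of this post is that the ethical emotion behaves more like the second.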

And before you think that I’m falling prey to some kind of appealing story, ask yourself why that particular story would sound appealing to humans. Why would it seem temptingly virtuous to let an ethical inhibition override, rather than just being one more weight in the balance?

One possible explanation would be if the emotion were carved out by the evolutionary-historical statistics of a black-swan bet.

Maybe you will, in all probability, get away with stealing the silverware on any particular occasion—just as your model of the world would extrapolate. But it was a statistical fact about your ancestors that sometimes the environment didn’t operate the way they expected. Someone was watching from behind the trees. On those occasions their reputation was permanently blackened; they lost status in the tribe, and perhaps were outcast or murdered. Such occasions could be statistically rare, and still counterbalance the benefit of a few silver spoons.
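To see how a rare catastrophe can dominate the arithmetic, here is the expected-value calculation in miniature. The numbers are illustrative inventions of mine (the post gives none); the point is only that a small probability of a large loss can swamp a near-certain small gain.

```python
# Expected fitness payoff of stealing, under invented illustrative numbers.
p_caught = 0.02   # rare: someone was watching from behind the trees
gain     = 1.0    # a few silver spoons
loss     = 100.0  # blackened reputation, outcasting, perhaps death

expected_value = (1 - p_caught) * gain - p_caught * loss
print(expected_value)  # 0.98 - 2.0 = -1.02: the rare disaster outweighs the spoons
```

A thief whose world-model neglects the 2% case sees only the spoons; selection, averaging over many such ancestors, sees the negative sum.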

The brain, like every other organ in the body, is a reproductive organ: it was carved out of entropy by the persistence of mutations that promoted reproductive fitness. And yet somehow, amazingly, the human brain wound up with circuitry for such things as honor, sympathy, and ethical resistance to temptations.

Which means that those alleles drove their alternatives to extinction. Humans, the organisms, can be nice to each other; but the alleles’ game of frequencies is zero-sum. Honorable ancestors didn’t necessarily kill the dishonorable ones. But if, by cooperating with each other, honorable ancestors outreproduced less honorable folk, then the honor allele killed the dishonor allele as surely as if it erased the DNA sequence off a blackboard.

That might be something to think about, the next time you’re wondering whether you should just give in to your ethical impulses, or try to override them with your rational awareness.

Especially if you’re tempted to engage in some chicanery “for the greater good”—tempted to decide that the end justifies the means. Evolution doesn’t care whether something actually promotes the greater good—that’s not how gene frequencies change. But if transgressive plans go awry often enough to hurt the transgressor, how much more often would they go awry and hurt the intended beneficiaries?

Historically speaking, it seems likely that, of those who set out to rob banks or murder opponents “in a good cause”, those who managed to hurt themselves mostly wouldn’t make the history books. (Unless they got a second chance, like Hitler after the failed Beer Hall Putsch.) Among the cases we do read about in the history books, many people have done very well for themselves out of their plans to lie and rob and murder “for the greater good”. But how many people cheated their way to actual huge altruistic benefits—cheated and actually realized the justifying greater good? Surely there must be at least one or two cases known to history—at least one king somewhere who took power by lies and assassination, and then ruled wisely and well—but I can’t actually name a case off the top of my head. By and large, it seems to me a pretty fair generalization that people who achieve great good ends manage not to find excuses for all that much evil along the way.

Somehow, people seem much more likely to endorse plans that involve just a little pain for someone else, on behalf of the greater good, than to work out a way to let the sacrifice be themselves. But when you plan to damage society in order to save it, remember that your brain contains a sense of ethical unease that evolved from transgressive plans blowing up and damaging the originator—never mind the expected value of all the damage done to other people, if you really do care about them.

If natural selection, which doesn’t care at all about the welfare of unrelated strangers, still manages to give you a sense of ethical unease on account of transgressive plans not always going as planned—then how much more reluctant should you be to rob banks for a good cause, if you aspire to actually help and protect others?

Part of the sequence Ethical Injunctions

Next post: “Ethical Injunctions”

Previous post: “Protected From Myself”