Ethical Inhibitions

Followup to: Entangled Truths, Contagious Lies, Evolutionary Psychology

What’s up with that bizarre emotion we humans have, this sense of ethical caution?

One can understand sexual lust, parental care, and even romantic attachment. The evolutionary psychology of such emotions might be subtler than it at first appears, but if you ignore the subtleties, the surface reasons are obvious. But why a sense of ethical caution? Why honor, why righteousness? (And no, it’s not group selection; it never is.) What reproductive benefit does that provide?

The specific ethical codes that people feel uneasy violating vary from tribe to tribe (though there are certain regularities). But the emotion associated with feeling ethically inhibited—well, I Am Not An Evolutionary Anthropologist, but that looks like a human universal to me, something with brainware support.

The obvious story behind prosocial emotions in general is that those who offend against the group are sanctioned; this converts the emotion to an individual reproductive advantage. The human organism, executing the ethical-caution adaptation, ends up avoiding the group sanctions that would follow a violation of the code. This obvious answer may even be the entire answer.

But I suggest—if a bit more tentatively than usual—that by the time human beings were evolving the emotion associated with “ethical inhibition”, we were already intelligent enough to observe the existence of such things as group sanctions. We were already smart enough (I suggest) to model what the group would punish, and to fear that punishment.

Sociopaths have a concept of getting caught, and they try to avoid getting caught. Why isn’t this sufficient? Why have an extra emotion, a feeling that inhibits you even when you don’t expect to be caught? Wouldn’t this, from evolution’s perspective, just result in passing up perfectly good opportunities?

So I suggest (tentatively) that humans naturally underestimate the odds of getting caught. We don’t foresee all the possible chains of causality, all the entangled facts that can bring evidence against us. Those ancestors who lacked a sense of ethical caution stole the silverware when they expected that no one would catch them or punish them; and were nonetheless caught or punished often enough, on average, to outweigh the value of the silverware.

Admittedly, this may be an unnecessary assumption. It is a general idiom of biology that evolution is the only long-term consequentialist; organisms compute short-term rewards. Hominids violate this rule, but that is a very recent innovation.

So one could counter-argue: “Early humans didn’t reliably forecast the punishment that follows from breaking social codes, so they didn’t reliably think consequentially about it, so they developed an instinct to obey the codes.” Maybe the modern sociopaths who evade getting caught are smarter than average. Or modern sociopaths are better educated than hunter-gatherer sociopaths. Or modern sociopaths get more second chances to recover from initial stumbles—they can change their name and move. It’s not so strange to find an emotion executing in some exceptional circumstance where it fails to provide a reproductive benefit.

But I feel justified in bringing up the more complicated hypothesis, because ethical inhibitions are archetypally that which stops us even when we think no one is looking. A humanly universal concept, so far as I know, though I am not an anthropologist.

Ethical inhibition, as a human motivation, seems to be implemented in a distinct style from hunger or lust. Hunger and lust can be outweighed when stronger desires are at stake; but the emotion associated with ethical prohibitions tries to assert itself deontologically. If you have the sense at all that you shouldn’t do it, you have the sense that you unconditionally shouldn’t do it. The emotion associated with ethical caution would seem to be a drive that—successfully or unsuccessfully—tries to override the temptation, not just weigh against it.

A monkey can be trapped by a food reward inside a hollowed shell—it can reach in easily enough, but once it closes its fist, it can’t take its hand out. The monkey may be screaming with distress, and still be unable to override the instinct to keep hold of the food. We humans can do better than that; we can let go of the food reward and run away, when our brain is warning us of the long-term consequences.

But why does the sensation of ethical inhibition, that might also command us to pass up a food reward, have a similar override-quality—even in the absence of explicitly expected long-term consequences? Is it just that ethical emotions evolved recently, and happen to be implemented in prefrontal cortex next to the long-term-override circuitry?

What is this tendency to feel inhibited from stealing the food reward? This message that tries to assert “I override”, not just “I weigh against”? Even when we don’t expect the long-term consequences of being discovered?

And before you think that I’m falling prey to some kind of appealing story, ask yourself why that particular story would sound appealing to humans. Why would it seem temptingly virtuous to let an ethical inhibition override, rather than just being one more weight in the balance?

One possible explanation would be if the emotion were carved out by the evolutionary-historical statistics of a black-swan bet.

Maybe you will, in all probability, get away with stealing the silverware on any particular occasion—just as your model of the world would extrapolate. But it was a statistical fact about your ancestors that sometimes the environment didn’t operate the way they expected. Someone was watching from behind the trees. On those occasions their reputation was permanently blackened; they lost status in the tribe, and perhaps were outcast or murdered. Such occasions could be statistically rare, and still counterbalance the benefit of a few silver spoons.
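To make the black-swan arithmetic concrete, here is a minimal sketch in Python, with entirely invented numbers, of how a rare but severe punishment can make a transgression a net fitness loss even when the transgressor gets away with it almost every time:

```python
# Illustrative only: every number here is invented, not measured.
# Expected fitness payoff of stealing the silverware, when getting
# caught is rare but catastrophic.

gain_if_unseen = 1.0    # modest benefit: a few silver spoons
loss_if_caught = 100.0  # reputation blackened; outcast or killed
p_caught = 0.02         # rare: someone watching from behind the trees

expected_payoff = (1 - p_caught) * gain_if_unseen - p_caught * loss_if_caught
print(expected_payoff)  # 0.98 * 1.0 - 0.02 * 100.0 = -1.02
```

Under these made-up numbers, the thief wins 98% of the time and the theft is still a losing bet on average; that is the statistical shape a hard-coded inhibition could be tracking.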

The brain, like every other organ in the body, is a reproductive organ: it was carved out of entropy by the persistence of mutations that promoted reproductive fitness. And yet somehow, amazingly, the human brain wound up with circuitry for such things as honor, sympathy, and ethical resistance to temptations.

Which means that those alleles drove their alternatives to extinction. Humans, the organisms, can be nice to each other; but the alleles’ game of frequencies is zero-sum. Honorable ancestors didn’t necessarily kill the dishonorable ones. But if, by cooperating with each other, honorable ancestors outreproduced less honorable folk, then the honor allele killed the dishonor allele as surely as if it erased the DNA sequence off a blackboard.

That might be something to think about, the next time you’re wondering if you should just give in to your ethical impulses, or try to override them with your rational awareness.

Especially if you’re tempted to engage in some chicanery “for the greater good”—tempted to decide that the end justifies the means. Evolution doesn’t care about whether something actually promotes the greater good—that’s not how gene frequencies change. But if transgressive plans go awry often enough to hurt the transgressor, how much more often would they go awry and hurt the intended beneficiaries?

Historically speaking, it seems likely that, of those who set out to rob banks or murder opponents “in a good cause”, those who managed to hurt themselves mostly wouldn’t make the history books. (Unless they got a second chance, like Hitler after the failed Beer Hall Putsch.) Among the cases we do read about in the history books, many people have done very well for themselves out of their plans to lie and rob and murder “for the greater good”. But how many people cheated their way to actual huge altruistic benefits—cheated and actually realized the justifying greater good? Surely there must be at least one or two cases known to history—at least one king somewhere who took power by lies and assassination, and then ruled wisely and well—but I can’t actually name a case off the top of my head. By and large, it seems to me a pretty fair generalization that people who achieve great good ends manage not to find excuses for all that much evil along the way.

Somehow, people seem much more likely to endorse plans that involve just a little pain for someone else, on behalf of the greater good, than to work out a way to let the sacrifice be themselves. But when you plan to damage society in order to save it, remember that your brain contains a sense of ethical unease that evolved from transgressive plans blowing up and damaging the originator—never mind the expected value of all the damage done to other people, if you really do care about them.

If natural selection, which doesn’t care at all about the welfare of unrelated strangers, still manages to give you a sense of ethical unease on account of transgressive plans not always going as planned—then how much more reluctant should you be to rob banks for a good cause, if you aspire to actually help and protect others?

Part of the sequence Ethical Injunctions

Next post: “Ethical Injunctions”

Previous post: “Protected From Myself”