Interpersonal Morality

Followup to: The Bedrock of Fairness

Every time I wonder if I really need to do so much prep work to explain an idea, I manage to forget some minor thing and a dozen people promptly post objections.

In this case, I seem to have forgotten to cover the topic of how morality applies to more than one person at a time.

Stop laughing, it’s not quite as dumb an oversight as it sounds. Sort of like how some people argue that macroeconomics should be constructed from microeconomics, I tend to see interpersonal morality as constructed from personal morality. (And definitely not the other way around!)

In “The Bedrock of Fairness” I offered a situation where three people discover a pie, and one of them insists that they want half. This is actually toned down from an older dialogue where five people discover a pie, and one of them—regardless of any argument offered—insists that they want the whole pie.

Let’s consider the latter situation: Dennis wants the whole pie. Not only that, Dennis says that it is “fair” for him to get the whole pie, and that the “right” way to resolve this group disagreement is for him to get the whole pie; and he goes on saying this no matter what arguments are offered him.

This group is not going to agree, no matter what. But I would, nonetheless, say that the right thing to do, the fair thing to do, is to give Dennis one-fifth of the pie—the other four combining to hold him off by force, if necessary, if he tries to take more.

A terminological note:

In this series of posts I have been using “morality” to mean something more like “the sum of all values and valuation rules”, not just “values that apply to interactions between people”.

The ordinary usage would have it that jumping on a trampoline is not “morality”, it is just some selfish fun. On the other hand, giving someone else a turn to jump on the trampoline is more akin to “morality” in common usage; and if you say “Everyone should take turns!” that’s definitely “morality”.

But the thing-I-want-to-talk-about includes the Fun Theory of a single person jumping on a trampoline.

Think of what a disaster it would be if all fun were removed from human civilization! So I consider it quite right to jump on a trampoline. Even if one would not say, in ordinary conversation, “I am jumping on that trampoline because I have a moral obligation to do so.” (Indeed, that sounds rather dull, and not at all fun, which is another important element of my “morality”.)

Alas, I do get the impression that in a standard academic discussion, one would use the term “morality” to refer to the sum-of-all-valu(ation rul)es that I am talking about. If there’s a standard alternative term in moral philosophy then do please let me know.

If there’s a better term than “morality” for the sum of all values and valuation rules, then this would free up “morality” for interpersonal values, which is closer to the common usage.

Some years ago, I was pondering what to say to the old cynical argument: If two monkeys want the same banana, in the end one will have it, and the other will cry morality. I think the particular context was about whether the word “rights”, as in the context of “individual rights”, meant anything. It had just been vehemently asserted (on the Extropians mailing list, I think) that this concept was meaningless and ought to be tossed out the window.

Suppose there are two people, a Mugger and a Muggee. The Mugger wants to take the Muggee’s wallet. The Muggee doesn’t want to give it to him. A cynic might say: “There is nothing more to say than this; they disagree. What use is it for the Muggee to claim that he has an individual_right to keep his wallet? The Mugger will just claim that he has an individual_right to take the wallet.”

Now today I might introduce the notion of a 1-place versus 2-place function, and reply to the cynic, “Either they do not mean the same thing by individual_right, or at least one of them is very mistaken about what their common morality implies.” At most one of these people is controlled by a good approximation of what I name when I say “morality”, and the other one is definitely not.
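To make the 1-place versus 2-place distinction concrete, here is a minimal Python sketch. Every name in it (Morality, endorses, the sample claims) is invented for illustration, and the contents of common_morality are a stand-in for the demo, not a claim about what the shared criterion actually endorses.

```python
class Morality:
    """Toy container for the claims a given valuation system endorses."""
    def __init__(self, endorsed):
        self.endorsed = set(endorsed)

    def endorses(self, claim):
        return claim in self.endorsed

mugger_view = Morality({"take the wallet"})
muggee_view = Morality({"keep the wallet"})

def right_2place(speaker, claim):
    # 2-place: "right" is evaluated relative to whoever is speaking.
    # The Mugger and the Muggee can both get True here without contradiction,
    # because they are computing two different functions and merely sharing a word.
    return speaker.endorses(claim)

# Stand-in for the one shared criterion; its contents are only for the demo.
common_morality = Morality({"keep the wallet"})

def right_1place(claim):
    # 1-place: the criterion is fixed once, not supplied by the speaker.
    # If both parties mean *this* function by "individual_right", then at most
    # one of them is correct about the wallet.
    return common_morality.endorses(claim)

print(right_2place(mugger_view, "take the wallet"))  # True
print(right_2place(muggee_view, "keep the wallet"))  # True -- no real disagreement yet
print(right_1place("take the wallet"))               # False, under the stand-in criterion
```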

But the cynic might just say again, “So what? That’s what you say. The Mugger could just say the opposite. What meaning is there in such claims? What difference does it make?”

So I came up with this reply: “Suppose that I happen along this mugging. I will decide to side with the Muggee, not the Mugger, because I have the notion that the Mugger is interfering with the Muggee’s individual_right to keep his wallet, rather than the Muggee interfering with the Mugger’s individual_right to take it. And if a fourth person comes along, and must decide whether to allow my intervention, or alternatively stop me from treading on the Mugger’s individual_right to take the wallet, then they are likely to side with the idea that I can intervene against the Mugger, in support of the Muggee.”

Now this does not work as a metaethics; it does not work to define the word should. If you fell backward in time, to an era when no one on Earth thought that slavery was wrong, you should still help slaves escape their owners. Indeed, the era when such an act was done in heroic defiance of society and the law was not so very long ago.

But to defend the notion of individual_rights against the charge of meaninglessness, the notion of third-party interventions and fourth-party allowances of those interventions seems to me to coherently cash out what is asserted when we assert that an individual_right exists. To assert that someone has a right to keep their wallet is to assert that third parties should help them keep it, and that fourth parties should applaud those who thus help.
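Here is a toy rendering of that cash-out, again with made-up function names; the True return values are placeholders for one’s actual individual judgments, not assertions about what those judgments are.

```python
def third_party_should_help(holder, item):
    # My judgment, as a bystander, that I should help the holder keep the item.
    return True  # placeholder for one's actual individual morality

def fourth_party_should_applaud(helper, holder, item):
    # My judgment that onlookers should allow and applaud such an intervention.
    return True  # placeholder, likewise

def asserts_right_to_keep(holder, item):
    # On this reading, "holder has an individual_right to keep item" asserts
    # nothing over and above the two judgments combined below.
    return (third_party_should_help(holder, item)
            and fourth_party_should_applaud("any helper", holder, item))

print(asserts_right_to_keep("Muggee", "wallet"))  # True, under the placeholders
```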

This perspective does make a good deal of what is said about individual_rights into nonsense. “Everyone has a right to be free from starvation!” Um, who are you talking to? Nature? Perhaps you mean, “If you’re starving, and someone else has a hamburger, I’ll help you take it.” If so, you should say so clearly. (See also The Death of Common Sense.)

So that is a notion of individual_rights, but what does it have to do with the more general question of interpersonal morality?

The notion is that you can construct interpersonal morality out of individual morality. Just as, in this particular example, I constructed the notion of what is asserted by talking about an individual_right, by making it an assertion about whether third parties should decide, for themselves, to interfere; and whether fourth parties should, individually, decide to applaud the interference.

Why go to such lengths to define things in individual terms? Some people might say: “To assert the existence of a right is to say what society should do.”

But societies don’t always agree on things. And then you, as an individual, will have to decide what’s right for you to do, in that case.

“But individuals don’t always agree within themselves, either,” you say. “They have emotional conflicts.”

Well… you could say that and it would sound wise. But generally speaking, neurologically intact humans will end up doing some particular thing. As opposed to flopping around on the floor as their limbs twitch in different directions under the temporary control of different personalities. Contrast this to a government or a corporation.

A human brain is a coherently adapted system whose parts have been together optimized for a common criterion of fitness (more or less). A group is not functionally optimized as a group. (You can verify this very quickly by looking at the sex ratios in a maternity hospital.) Individuals may be optimized to do well out of their collective interaction—but that is quite a different selection pressure, the adaptations for which do not always produce group agreement! So if you want to look at a coherent decision system, it really is a good idea to look at one human, rather than a bureaucracy.

I myself am one person—admittedly with a long trail of human history behind me that makes me what I am, maybe more than any thoughts I ever thought myself. But still, at the end of the day, I am writing this blog post; it is not the negotiated output of a consortium. It is quite easy for me to imagine being faced, as an individual, with a case where the local group does not agree within itself—and in such a case I must decide, as an individual, what is right. In general I must decide what is right! If I go along with the group, that does not absolve me of responsibility. If there are any countries that think differently, they can write their own blog posts.

This perspective, which does not exhibit undefined behavior in the event of a group disagreement, is one reason why I tend to treat interpersonal morality as a special case of individual morality, and not the other way around.

Now, with that said, interpersonal morality is a highly distinguishable special case of morality.

As humans, we don’t just hunt in groups, we argue in groups. We’ve probably been arguing linguistically in adaptive political contexts for long enough—hundreds of thousands of years, maybe millions—to have adapted specifically to that selection pressure.

So it shouldn’t be all that surprising if we have moral intuitions, like fairness, that apply specifically to the morality of groups.

One of these intuitions seems to be universalizability.

If Dennis just strides around saying, “I want the whole pie! Give me the whole pie! What’s fair is for me to get the whole pie! Not you, me!” then that’s not going to persuade anyone else in the tribe. Dennis has not managed to frame his desires in a form which enables them to leap from one mind to another. His desires will not take wings and become interpersonal. He is not likely to leave many offspring.

Now, the evolution of interpersonal moral intuitions is a topic which (he said, smiling grimly) deserves its own blog post. And its own academic subfield. (Anything out there besides The Evolutionary Origins of Morality? It seemed to me very basic.)

But I do think it worth noting that, rather than trying to manipulate 2-person and 3-person and 7-person interactions, some of our moral instincts seem to have made the leap to N-person interactions. We just think about general moral arguments. As though the values that leap from mind to mind take on a life of their own and become something that you can reason about. To the extent that everyone in your environment does share some values, this will work as adaptive cognition. This creates moral intuitions that are not just interpersonal but transpersonal.

Transpersonal moral intuitions are not necessarily false-to-fact, so long as you don’t expect your arguments cast in “universal” terms to sway a rock. There really is such a thing as the psychological unity of humankind. Read a morality tale from an entirely different culture; I bet you can figure out what it’s trying to argue for, even if you don’t agree with it.

The problem arises when you try to apply the universalizability instinct to say, “If this argument could not persuade an UnFriendly AI that tries to maximize the number of paperclips in the universe, then it must not be a good argument.”

There are No Universally Compelling Arguments, so if you try to apply the universalizability instinct universally, you end up with no morality. Not even universalizability; the paperclip maximizer has no intuition of universalizability. It just chooses that action which leads to a future containing the maximum number of paperclips.
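As a caricature of that decision rule (with invented names like predict_future and count_paperclips, and a toy world model), the point is that nothing in it has a slot that responds to moral argument:

```python
def choose_action(actions, predict_future, count_paperclips):
    # No fairness, no universalizability, no openness to persuasion:
    # just whichever action leads to the future with the most paperclips.
    return max(actions, key=lambda action: count_paperclips(predict_future(action)))

best = choose_action(
    actions=["offer a compelling moral argument", "convert matter into paperclips"],
    predict_future=lambda action: action,  # toy world model: action -> resulting future
    count_paperclips=lambda future: 10**6 if future == "convert matter into paperclips" else 0,
)
print(best)  # "convert matter into paperclips", however good the argument was
```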

There are some things you just can’t have a moral conversation with. There is not that within them that could respond to your arguments. You should think twice and maybe three times before ever saying this about one of your fellow humans—but a paperclip maximizer is another matter. You’ll just have to override your moral instinct to regard anything labeled a “mind” as a little floating ghost-in-the-machine, with a hidden core of perfect emptiness, which could surely be persuaded to reject its mistaken source code if you just came up with the right argument. If you’re going to preserve universalizability as an intuition, you can try extending it to all humans; but you can’t extend it to rocks or chatbots, nor even powerful optimization processes like evolutions or paperclip maximizers.

The question of how much in-principle agreement would exist among human beings about the transpersonal portion of their values, given perfect knowledge of the facts and perhaps a much wider search of the argument space, is not a matter on which we can get much evidence by observing the prevalence of moral agreement and disagreement in today’s world. Any disagreement might be something that the truth could destroy: it might depend on a different view of how the world is, or maybe just on not yet having heard the right argument. It is also possible that knowing more could dispel illusions of moral agreement, not just produce new accords.

But does that question really make much difference in day-to-day moral reasoning, if you’re not trying to build a Friendly AI?

Part of The Metaethics Sequence

Next post: “Morality as Fixed Computation”

Previous post: “The Meaning of Right”