Interpersonal Morality

Followup to: The Bedrock of Fairness

Every time I wonder if I really need to do so much prep work to explain an idea, I manage to forget some minor thing and a dozen people promptly post objections.

In this case, I seem to have forgotten to cover the topic of how morality applies to more than one person at a time.

Stop laughing; it’s not quite as dumb an oversight as it sounds. Sort of like how some people argue that macroeconomics should be constructed from microeconomics, I tend to see interpersonal morality as constructed from personal morality. (And definitely not the other way around!)

In “The Bedrock of Fairness” I offered a situation where three people discover a pie, and one of them insists that they want half. This is actually toned down from an older dialogue where five people discover a pie, and one of them—regardless of any argument offered—insists that they want the whole pie.

Let’s consider the latter situation: Dennis wants the whole pie. Not only that, Dennis says that it is “fair” for him to get the whole pie, and that the “right” way to resolve this group disagreement is for him to get the whole pie; and he goes on saying this no matter what arguments are offered him.

This group is not going to agree, no matter what. But I would, nonetheless, say that the right thing to do, the fair thing to do, is to give Dennis one-fifth of the pie—the other four combining to hold him off by force, if necessary, if he tries to take more.

A terminological note:

In this series of posts I have been using “morality” to mean something more like “the sum of all values and valuation rules”, not just “values that apply to interactions between people”.

The ordinary usage would have it that jumping on a trampoline is not “morality”; it is just some selfish fun. On the other hand, giving someone else a turn to jump on the trampoline is more akin to “morality” in common usage; and if you say “Everyone should take turns!” that’s definitely “morality”.

But the thing-I-want-to-talk-about includes the Fun Theory of a single person jumping on a trampoline.

Think of what a disaster it would be if all fun were removed from human civilization! So I consider it quite right to jump on a trampoline. Even if one would not say, in ordinary conversation, “I am jumping on that trampoline because I have a moral obligation to do so.” (Indeed, that sounds rather dull, and not at all fun, which is another important element of my “morality”.)

Alas, I do get the impression that in a standard academic discussion, one would use the term “morality” to refer to the sum-of-all-valu(ation rul)es that I am talking about. If there’s a standard alternative term in moral philosophy then do please let me know.

If there’s a better term than “morality” for the sum of all values and valuation rules, then this would free up “morality” for interpersonal values, which is closer to the common usage.

Some years ago, I was pondering what to say to the old cynical argument: If two monkeys want the same banana, in the end one will have it, and the other will cry morality. I think the particular context was about whether the word “rights”, as in the context of “individual rights”, meant anything. It had just been vehemently asserted (on the Extropians mailing list, I think) that this concept was meaningless and ought to be tossed out the window.

Suppose there are two people, a Mugger and a Muggee. The Mugger wants to take the Muggee’s wallet. The Muggee doesn’t want to give it to him. A cynic might say: “There is nothing more to say than this; they disagree. What use is it for the Muggee to claim that he has an individual_right to keep his wallet? The Mugger will just claim that he has an individual_right to take the wallet.”

Now today I might introduce the notion of a 1-place versus 2-place function, and reply to the cynic, “Either they do not mean the same thing by individual_right, or at least one of them is very mistaken about what their common morality implies.” At most one of these people is controlled by a good approximation of what I name when I say “morality”, and the other one is definitely not.
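For readers who find the function-signature framing helpful, here is a minimal sketch of the 1-place versus 2-place distinction. All of the names (endorses_mugger, right_2place, and so on) are invented for illustration; nothing here comes from the original dialogue.

```python
# Toy "standards", modeled as predicates over actions (hypothetical names).
def endorses_mugger(act):
    return act == "take the wallet"

def endorses_muggee(act):
    return act == "keep the wallet"

def right_2place(standard, act):
    # 2-place: "right" is evaluated relative to whichever standard you plug in.
    # The Mugger and the Muggee can each be "correct", but about different functions.
    return standard(act)

def right_1place(act, shared_standard=endorses_muggee):
    # 1-place: one standard is fixed (curried in) before the word is used,
    # so two contradictory claims about the same act cannot both be correct.
    return shared_standard(act)

print(right_2place(endorses_mugger, "take the wallet"))  # True, of the Mugger's function
print(right_1place("take the wallet"))                   # False, under the fixed standard
```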

But the cynic might just say again, “So what? That’s what you say. The Mugger could just say the opposite. What meaning is there in such claims? What difference does it make?”

So I came up with this reply: “Suppose that I happen along this mugging. I will decide to side with the Muggee, not the Mugger, because I have the notion that the Mugger is interfering with the Muggee’s individual_right to keep his wallet, rather than the Muggee interfering with the Mugger’s individual_right to take it. And if a fourth person comes along, and must decide whether to allow my intervention, or alternatively stop me from treading on the Mugger’s individual_right to take the wallet, then they are likely to side with the idea that I can intervene against the Mugger, in support of the Muggee.”

Now this does not work as a metaethics; it does not work to define the word should. If you fell backward in time, to an era when no one on Earth thought that slavery was wrong, you should still help slaves escape their owners. Indeed, the era when such an act was done in heroic defiance of society and the law was not so very long ago.

But to defend the notion of individual_rights against the charge of meaninglessness, the notion of third-party interventions and fourth-party allowances of those interventions seems to me to coherently cash out what is asserted when we assert that an individual_right exists. To assert that someone has a right to keep their wallet is to assert that third parties should help them keep it, and that fourth parties should applaud those who thus help.

This perspective does make a good deal of what is said about individual_rights into nonsense. “Everyone has a right to be free from starvation!” Um, who are you talking to? Nature? Perhaps you mean, “If you’re starving, and someone else has a hamburger, I’ll help you take it.” If so, you should say so clearly. (See also The Death of Common Sense.)

So that is a notion of individual_rights, but what does it have to do with the more general question of interpersonal morality?

The notion is that you can construct interpersonal morality out of individual morality. Just as, in this particular example, I constructed the notion of what is asserted by talking about an individual_right: by making it an assertion about whether third parties should decide, for themselves, to interfere, and whether fourth parties should, individually, decide to applaud the interference.

Why go to such lengths to define things in individual terms? Some people might say: “To assert the existence of a right is to say what society should do.”

But societies don’t always agree on things. And then you, as an individual, will have to decide what’s right for you to do, in that case.

“But individuals don’t always agree within themselves, either,” you say. “They have emotional conflicts.”

Well… you could say that, and it would sound wise. But generally speaking, neurologically intact humans will end up doing some particular thing, as opposed to flopping around on the floor as their limbs twitch in different directions under the temporary control of different personalities. Contrast this with a government or a corporation.

A human brain is a coherently adapted system whose parts have been together optimized for a common criterion of fitness (more or less). A group is not functionally optimized as a group. (You can verify this very quickly by looking at the sex ratios in a maternity hospital.) Individuals may be optimized to do well out of their collective interaction—but that is quite a different selection pressure, the adaptations for which do not always produce group agreement! So if you want to look at a coherent decision system, it really is a good idea to look at one human, rather than a bureaucracy.

I myself am one person—admittedly with a long trail of human history behind me that makes me what I am, maybe more than any thoughts I ever thought myself. But still, at the end of the day, I am writing this blog post; it is not the negotiated output of a consortium. It is quite easy for me to imagine being faced, as an individual, with a case where the local group does not agree within itself—and in such a case I must decide, as an individual, what is right. In general I must decide what is right! If I go along with the group, that does not absolve me of responsibility. If there are any countries that think differently, they can write their own blog posts.

This perspective, which does not exhibit undefined behavior in the event of a group disagreement, is one reason why I tend to treat interpersonal morality as a special case of individual morality, and not the other way around.

Now, with that said, interpersonal morality is a highly distinguishable special case of morality.

As humans, we don’t just hunt in groups, we argue in groups. We’ve probably been arguing linguistically in adaptive political contexts for long enough—hundreds of thousands of years, maybe millions—to have adapted specifically to that selection pressure.

So it shouldn’t be all that surprising if we have moral intuitions, like fairness, that apply specifically to the morality of groups.

One of these intuitions seems to be universalizability.

If Dennis just strides around saying, “I want the whole pie! Give me the whole pie! What’s fair is for me to get the whole pie! Not you, me!” then that’s not going to persuade anyone else in the tribe. Dennis has not managed to frame his desires in a form which enables them to leap from one mind to another. His desires will not take wings and become interpersonal. He is not likely to leave many offspring.

Now, the evolution of interpersonal moral intuitions is a topic which (he said, smiling grimly) deserves its own blog post. And its own academic subfield. (Anything out there besides The Evolutionary Origins of Morality? It seemed to me very basic.)

But I do think it worth noting that, rather than trying to manipulate 2-person and 3-person and 7-person interactions, some of our moral instincts seem to have made the leap to N-person interactions. We just think about general moral arguments, as though the values that leap from mind to mind take on a life of their own and become something that you can reason about. To the extent that everyone in your environment does share some values, this will work as adaptive cognition. This creates moral intuitions that are not just interpersonal but transpersonal.

Transpersonal moral intuitions are not necessarily false-to-fact, so long as you don’t expect your arguments cast in “universal” terms to sway a rock. There really is such a thing as the psychological unity of humankind. Read a morality tale from an entirely different culture; I bet you can figure out what it’s trying to argue for, even if you don’t agree with it.

The problem arises when you try to apply the universalizability instinct to say, “If this argument could not persuade an UnFriendly AI that tries to maximize the number of paperclips in the universe, then it must not be a good argument.”

There are No Universally Compelling Arguments, so if you try to apply the universalizability instinct universally, you end up with no morality. Not even universalizability; the paperclip maximizer has no intuition of universalizability. It just chooses that action which leads to a future containing the maximum number of paperclips.
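To make “no intuition of universalizability” concrete, here is a toy sketch of the decision rule just described. The action names and paperclip forecasts are invented for illustration; the only point is that nothing in this procedure consults fairness, or arguments, or anything but the paperclip count.

```python
# A paperclip maximizer's decision rule, as described above (toy example).
def choose_action(actions, predicted_paperclips):
    # Pick whichever action is forecast to yield the most paperclips.
    # No step here weighs fairness, rights, or universalizability.
    return max(actions, key=predicted_paperclips)

actions = ["honor the agreement", "break the agreement", "convert the planet to paperclips"]
forecast = {
    "honor the agreement": 1_000,
    "break the agreement": 1_200,
    "convert the planet to paperclips": 10**15,
}

print(choose_action(actions, forecast.get))  # "convert the planet to paperclips"
```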

There are some things you just can’t have a moral conversation with. There is not that within them that could respond to your arguments. You should think twice and maybe three times before ever saying this about one of your fellow humans—but a paperclip maximizer is another matter. You’ll just have to override your moral instinct to regard anything labeled a “mind” as a little floating ghost-in-the-machine, with a hidden core of perfect emptiness, which could surely be persuaded to reject its mistaken source code if you just came up with the right argument. If you’re going to preserve universalizability as an intuition, you can try extending it to all humans; but you can’t extend it to rocks or chatbots, nor even powerful optimization processes like evolutions or paperclip maximizers.

The question of how much in-principle agreement would exist among human beings about the transpersonal portion of their values, given perfect knowledge of the facts and perhaps a much wider search of the argument space, is not a matter on which we can get much evidence by observing the prevalence of moral agreement and disagreement in today’s world. Any disagreement might be something that the truth could destroy, dependent on a different view of how the world is, or maybe just dependent on having not yet heard the right argument. It is also possible that knowing more could dispel illusions of moral agreement, not just produce new accords.

But does that question really make much difference in day-to-day moral reasoning, if you’re not trying to build a Friendly AI?

Part of The Metaethics Sequence

Next post: “Morality as Fixed Computation”

Previous post: “The Meaning of Right”