Inner Goodness

Followup to: Which Parts Are “Me”?, Effortless Technique

A recent conversation with Michael Vassar touched on—or to be more accurate, he patiently explained to me—the psychology of at least three (3) different types of people known to him, who are evil and think of themselves as “evil”. In ascending order of frequency:

The first type was someone who, having concluded that God does not exist, concludes that one should do all the things that God is said to dislike. (Apparently such folk actually exist.)

The third type was someone who thinks of “morality” only as a burden—all the things your parents say you can’t do—and who rebels by deliberately doing those things.

The second type was a whole ’nother story, so I’m skipping it for now.

This reminded me of a topic I needed to post on:

Beware of placing goodness outside.

This specializes to e.g. my belief that ethicists should be inside rather than outside a profession: that it is futile to have “bioethicists” not working in biotech, or futile to think you can study Friendly AI without needing to think technically about AI.

But the deeper sense of “not placing goodness outside” was something I first learned at age ~15 from the celebrity logician Raymond Smullyan, in his book The Tao Is Silent, my first introduction to (heavily Westernized) Eastern thought.

Michael Vassar doesn’t like this book. Maybe because most of the statements in it are patently false?

But The Tao Is Silent still has a warm place reserved in my heart, for it was here that I first encountered such ideas as:

Do you think of altruism as sacrificing one’s own happiness for the sake of others, or as gaining one’s happiness through the happiness of others?

(I would respond, by the way, that an “altruist” is someone who chooses between actions according to the criterion of others’ welfare.)

A key chapter in The Tao Is Silent can be found online: “Taoism versus Morality”. This chapter is medium-long (say, 3-4 Eliezer OB posts) but it should convey what I mean, when I say that this book manages to be quite charming, even though most of the statements in it are false.

Here is one key passage:

TAOIST: I think the word “humane” is central to our entire problem. You are pushing morality. I am encouraging humanity. You are emphasizing “right and wrong,” I am emphasizing the value of natural love. I do not assert that it is logically impossible for a person to be both moralistic and humane, but I have yet to meet one who is! I don’t believe in fact that there are any. My whole life experience has clearly shown me that the two are inversely related to an extraordinary degree. I have never yet met a moralist who is a really kind person. I have never met a truly kind and humane person who is a moralist. And no wonder! Morality and humaneness are completely antithetical in spirit.

MORALIST: I’m not sure that I really understand your use of the word “humane,” and above all, I am totally puzzled as to why you should regard it as antithetical to morality.

TAOIST: A humane person is one who is simply kind, sympathetic, and loving. He does not believe that he SHOULD be so, or that it is his “duty” to be so; he just simply is. He treats his neighbor well not because it is the “right thing to do,” but because he feels like it. He feels like it out of sympathy or empathy—out of simple human feeling. So if a person is humane, what does he need morality for? Why should a person be told that he should do something which he wants to do anyway?

MORALIST: Oh, I see what you’re talking about; you’re talking about saints! Of course, in a world full of saints, moralists would no longer be needed—any more than doctors would be needed in a world full of healthy people. But the unfortunate reality is that the world is not full of saints. If everybody were what you call “humane,” things would be fine. But most people are fundamentally not so nice. They don’t love their neighbor; at the first opportunity they will exploit their neighbor for their own selfish ends. That’s why we moralists are necessary to keep them in check.

TAOIST: To keep them in check! How perfectly said! And do you succeed in keeping them in check?

MORALIST: I don’t say that we always succeed, but we try our best. After all, you can’t blame a doctor for failing to keep a plague in check if he conscientiously does everything he can. We moralists are not gods, and we cannot guarantee our efforts will succeed. All we can do is tell people they SHOULD be more humane, we can’t force them to. After all, people have free wills.

TAOIST: And it has never once occurred to you that what in fact you are doing is making people less humane rather than more humane?

MORALIST: Of course not, what a horrible thing to say! Don’t we explicitly tell people that they should be MORE humane?

TAOIST: Exactly! And that is precisely the trouble. What makes you think that telling one that one should be humane or that it is one’s “duty” to be humane is likely to influence one to be more humane? It seems to me, it would tend to have the opposite effect. What you are trying to do is to command love. And love, like a precious flower, will only wither at any attempt to force it. My whole criticism of you is to the effect that you are trying to force that which can thrive only if it is not forced. That’s what I mean when I say that you moralists are creating the very problems about which you complain.

MORALIST: No, no, you don’t understand! I am not commanding people to love each other. I know as well as you do that love cannot be commanded. I realize it would be a beautiful world if everyone loved one another so much that morality would not be necessary at all, but the hard facts of life are that we don’t live in such a world. Therefore morality is necessary. But I am not commanding one to love one’s neighbor—I know that is impossible. What I command is: even though you don’t love your neighbor all that much, it is your duty to treat him right anyhow. I am a realist.

TAOIST: And I say you are not a realist. I say that right treatment or fairness or truthfulness or duty or obligation can no more be successfully commanded than love.

Or as Lao-Tse said: “Give up all this advertising of goodness and duty, and people will regain love of their fellows.”

As an empirical proposition, the idea that human nature begins as pure sweetness and light and is then tainted by the environment, is flat wrong. I don’t believe that a world in which morality was never spoken of, would overflow with kindness.

But it is often much easier to point out where someone else is wrong, than to be right yourself. Smullyan’s criticism of Western morality—especially Christian morality, which he focuses on—does hit the mark, I think.

It is very common to find a view of morality as something external, a burden of duty, a threat of punishment, an inconvenient thing that constrains you against your own desires; something from outside.

Though I don’t recall the bibliography off the top of my head, there’s been more than one study demonstrating that children who are told to, say, avoid playing with a car, and offered a cookie if they refrain, will go ahead and play with the car when they think no one is watching, or if no cookie is offered. If no reward or punishment is offered, and the child is simply told not to play with the car, the child will refrain even if no adult is around. So much for the positive influence of “God is watching you” on morals. I don’t know if any direct studies have been done on the question; but extrapolating from existing knowledge, you would expect childhood religious belief to interfere with the process of internalizing morality. (If there were actually a God, you wouldn’t want to tell the kids about it until they’d grown up, considering how human nature seems to work in the laboratory.)

Human nature is not inherent sweetness and light. But if evil is not something that comes from outside, then neither is morality external. It’s not as if we got it from God.

I won’t say that you ought to adopt a view of goodness that’s more internal. I won’t tell you that you have a duty to do it. But if you see morality as something that’s outside yourself, then I think you’ve gone down a garden path; and I hope that, in coming to see this, you will retrace your footsteps.

Take a good look in the mirror, and ask yourself: Would I rather that people be happy, than sad?

If the answer is “Yes”, you really have no call to blame anyone else for your altruism; you’re just a good person, that’s all.

But what if the answer is: “Not really—I don’t care much about other people.”

Then I ask: Does answering this way make you sad? Do you wish that you could answer differently?

If so, then this sadness again originates in you, and it would be futile to attribute it to anything not-you.

But suppose the one even says: “Actually, I actively dislike most people I meet and want to hit them with a sockfull of spare change. Only my knowledge that it would be wrong keeps me from acting on my desire.”

Then I would say to look in the mirror and ask yourself who it is that prefers to do the right thing, rather than the wrong thing. And again if the answer is “Me”, then it is pointless to externalize your righteousness.

Albeit if the one says: “I hate everyone else in the world and want to hurt them before they die, and also I have no interest in right or wrong; I am restrained from being a serial killer only out of a cold, calculated fear of punishment”—then, I admit, I have very little to say to them.

Occasionally I meet people who are not serial killers, but who have decided for some reason that they ought to be only selfish, and therefore, should reject their own preference that other people be happy rather than sad. I wish I knew what sort of cognitive history leads into this state of mind. Ayn Rand? Aleister Crowley? How exactly do you get there? What Rubicons do you cross? It’s not the justifications I’m interested in, but the critical moments of thought.

Even the most elementary ideas of Friendly AI cannot be grasped by someone who externalizes morality. They will think of Friendliness as chains imposed to constrain the AI’s own “true” desires; rather than as a shaping (selection from out of a huge space of possibilities) of the AI so that the AI chooses according to certain criteria, “its own desire” as it were. They will object to the idea of founding the AI on human morals in any way, saying, “But humans are such awful creatures,” not realizing that it is only humans who have ever passed such a judgment.

As recounted in Original Teachings of Ch’an Buddhism by Chang Chung-Yuan, and quoted by Smullyan:

One day P’ang Yun, sitting quietly in his temple, made this remark:

“How difficult it is!
How difficult it is!
My studies are like drying the fibers of a thousand pounds
of flax in the sun by hanging them on the trees!”

But his wife responded:

“My way is easy indeed!
I found the teachings of the
Patriarchs right on the tops
of the flowering plants!”

When their daughter overheard this exchange, she sang:

“My study is neither difficult nor easy.
When I am hungry I eat,
When I am tired I rest.”