A recent conversation with Michael Vassar touched on—or to be more accurate, he patiently explained to me—the psychology of at least three different types of people known to him, who are evil and think of themselves as “evil”. In ascending order of frequency:
The first type was someone who, having concluded that God does not exist, concludes that one should do all the things that God is said to dislike. (Apparently such folk actually exist.)
The second type was someone who thinks of “morality” only as a burden—all the things your parents say you can’t do—and who rebels by deliberately doing those things.

The third type was a whole ’nother story, so I’m skipping it for now.
This reminded me of a topic I needed to post on:
Beware of placing goodness outside.
This specializes to e.g. my belief that ethicists should be inside rather than outside a profession: that it is futile to have “bioethicists” not working in biotech, or to think you can study Friendly AI without thinking technically about AI.
But the deeper sense of “not placing goodness outside” was something I first learned at age ~15 from the celebrated logician Raymond Smullyan, in his book The Tao Is Silent, my first introduction to (heavily Westernized) Eastern thought.
Michael Vassar doesn’t like this book. Maybe because most of the statements in it are patently false?
But The Tao Is Silent still has a warm place reserved in my heart, for it was here that I first encountered such ideas as:
Do you think of altruism as sacrificing one’s own happiness for the sake of others, or as gaining one’s happiness through the happiness of others?
(I would respond, by the way, that an “altruist” is someone who chooses between actions according to the criterion of others’ welfare.)
A key chapter in The Tao Is Silent can be found online: “Taoism versus Morality”. This chapter is medium-long (say, 3-4 Eliezer OB posts) but it should convey what I mean, when I say that this book manages to be quite charming, even though most of the statements in it are false.
Here is one key passage:
TAOIST: I think the word “humane” is central to our entire problem. You are pushing morality. I am encouraging humanity. You are emphasizing “right and wrong,” I am emphasizing the value of natural love. I do not assert that it is logically impossible for a person to be both moralistic and humane, but I have yet to meet one who is! I don’t believe in fact that there are any. My whole life experience has clearly shown me that the two are inversely related to an extraordinary degree. I have never yet met a moralist who is a really kind person. I have never met a truly kind and humane person who is a moralist. And no wonder! Morality and humaneness are completely antithetical in spirit.
MORALIST: I’m not sure that I really understand your use of the word “humane,” and above all, I am totally puzzled as to why you should regard it as antithetical to morality.
TAOIST: A humane person is one who is simply kind, sympathetic, and loving. He does not believe that he SHOULD be so, or that it is his “duty” to be so; he just simply is. He treats his neighbor well not because it is the “right thing to do,” but because he feels like it. He feels like it out of sympathy or empathy—out of simple human feeling. So if a person is humane, what does he need morality for? Why should a person be told that he should do something which he wants to do anyway?
MORALIST: Oh, I see what you’re talking about; you’re talking about saints! Of course, in a world full of saints, moralists would no longer be needed—any more than doctors would be needed in a world full of healthy people. But the unfortunate reality is that the world is not full of saints. If everybody were what you call “humane,” things would be fine. But most people are fundamentally not so nice. They don’t love their neighbor; at the first opportunity they will exploit their neighbor for their own selfish ends. That’s why we moralists are necessary to keep them in check.
TAOIST: To keep them in check! How perfectly said! And do you succeed in keeping them in check?
MORALIST: I don’t say that we always succeed, but we try our best. After all, you can’t blame a doctor for failing to keep a plague in check if he conscientiously does everything he can. We moralists are not gods, and we cannot guarantee our efforts will succeed. All we can do is tell people they SHOULD be more humane; we can’t force them to. After all, people have free wills.
TAOIST: And it has never once occurred to you that what in fact you are doing is making people less humane rather than more humane?
MORALIST: Of course not, what a horrible thing to say! Don’t we explicitly tell people that they should be MORE humane?
TAOIST: Exactly! And that is precisely the trouble. What makes you think that telling one that one should be humane or that it is one’s “duty” to be humane is likely to influence one to be more humane? It seems to me, it would tend to have the opposite effect. What you are trying to do is to command love. And love, like a precious flower, will only wither at any attempt to force it. My whole criticism of you is to the effect that you are trying to force that which can thrive only if it is not forced. That’s what I mean when I say that you moralists are creating the very problems about which you complain.
MORALIST: No, no, you don’t understand! I am not commanding people to love each other. I know as well as you do that love cannot be commanded. I realize it would be a beautiful world if everyone loved one another so much that morality would not be necessary at all, but the hard facts of life are that we don’t live in such a world. Therefore morality is necessary. But I am not commanding one to love one’s neighbor—I know that is impossible. What I command is: even though you don’t love your neighbor all that much, it is your duty to treat him right anyhow. I am a realist.
TAOIST: And I say you are not a realist. I say that right treatment or fairness or truthfulness or duty or obligation can no more be successfully commanded than love.
Or as Lao-Tse said: “Give up all this advertising of goodness and duty, and people will regain love of their fellows.”
As an empirical proposition, the idea that human nature begins as pure sweetness and light and is then tainted by the environment is flat wrong. I don’t believe that a world in which morality was never spoken of would overflow with kindness.
But it is often much easier to point out where someone else is wrong, than to be right yourself. Smullyan’s criticism of Western morality—especially Christian morality, which he focuses on—does hit the mark, I think.
It is very common to find a view of morality as something external, a burden of duty, a threat of punishment, an inconvenient thing that constrains you against your own desires; something from outside.
Though I don’t recall the bibliography off the top of my head, there’s been more than one study demonstrating that children who are told to, say, avoid playing with a toy car, and who are offered a cookie for refraining, will go ahead and play with the car when they think no one is watching, or once the cookie is no longer on offer. If no reward or punishment is offered, and the child is simply told not to play with the car, the child will refrain even when no adult is around. So much for the positive influence of “God is watching you” on morals. I don’t know if any direct studies have been done on the question; but extrapolating from existing knowledge, you would expect childhood religious belief to interfere with the process of internalizing morality. (If there were actually a God, you wouldn’t want to tell the kids about it until they’d grown up, considering how human nature seems to work in the laboratory.)
Human nature is not inherent sweetness and light. But if evil is not something that comes from outside, then neither is morality external. It’s not as if we got it from God.
I won’t say that you ought to adopt a view of goodness that’s more internal. I won’t tell you that you have a duty to do it. But if you see morality as something that’s outside yourself, then I think you’ve gone down a garden path; and I hope that, in coming to see this, you will retrace your footsteps.
Take a good look in the mirror, and ask yourself: Would I rather that people be happy, than sad?
If the answer is “Yes”, you really have no call to blame anyone else for your altruism; you’re just a good person, that’s all.
But what if the answer is: “Not really—I don’t care much about other people.”
Then I ask: Does answering this way make you sad? Do you wish that you could answer differently?
If so, then this sadness again originates in you, and it would be futile to attribute it to anything not-you.
But suppose the one even says: “Actually, I actively dislike most people I meet and want to hit them with a sockful of spare change. Only my knowledge that it would be wrong keeps me from acting on my desire.”
Then I would say to look in the mirror and ask yourself who it is that prefers to do the right thing, rather than the wrong thing. And again if the answer is “Me”, then it is pointless to externalize your righteousness.
Albeit if the one says: “I hate everyone else in the world and want to hurt them before they die, and also I have no interest in right or wrong; I am restrained from being a serial killer only out of a cold, calculated fear of punishment”—then, I admit, I have very little to say to them.
Occasionally I meet people who are not serial killers, but who have decided for some reason that they ought to be only selfish, and therefore, should reject their own preference that other people be happy rather than sad. I wish I knew what sort of cognitive history leads into this state of mind. Ayn Rand? Aleister Crowley? How exactly do you get there? What Rubicons do you cross? It’s not the justifications I’m interested in, but the critical moments of thought.
Even the most elementary ideas of Friendly AI cannot be grasped by someone who externalizes morality. They will think of Friendliness as chains imposed to constrain the AI’s own “true” desires; rather than as a shaping (selection from out of a huge space of possibilities) of the AI so that the AI chooses according to certain criteria, “its own desire” as it were. They will object to the idea of founding the AI on human morals in any way, saying, “But humans are such awful creatures,” not realizing that it is only humans who have ever passed such a judgment.
As recounted in Original Teachings of Ch’an Buddhism by Chang Chung-Yuan, and quoted by Smullyan:
One day P’ang Yun, sitting quietly in his temple, made this remark:
“How difficult it is!
How difficult it is!
My studies are like drying the fibers of a thousand pounds
of flax in the sun by hanging them on the trees!”
But his wife responded:
“My way is easy indeed!
I found the teachings of the
Patriarchs right on the tops
of the flowering plants!”
When their daughter overheard this exchange, she sang:
“My study is neither difficult nor easy.
When I am hungry I eat,
When I am tired I rest.”