The Moral Void

Followup to: What Would You Do Without Morality?, Something to Protect

Once, discussing “horrible job interview questions” to ask candidates for a Friendly AI project, I suggested the following:

Would you kill babies if it was inherently the right thing to do? Yes [] No []

If “no”, under what circumstances would you not do the right thing to do? ___________

If “yes”, how inherently right would it have to be, for how many babies? ___________

Yesterday I asked, “What would you do without morality?” There were numerous objections to the question, as well there should have been. Nonetheless there is more than one kind of person who can benefit from being asked this question. Let’s say someone gravely declares, of some moral dilemma—say, a young man in Vichy France who must choose between caring for his mother and fighting for the Resistance—that there is no moral answer; both options are wrong and blamable; whoever faces the dilemma has had poor moral luck. Fine, let’s suppose this is the case: then when you cannot be innocent, justified, or praiseworthy, what will you choose anyway?

Many interesting answers were given to my question, “What would you do without morality?”. But one kind of answer was notable by its absence:

No one said, “I would ask what kind of behavior pattern was likely to maximize my inclusive genetic fitness, and execute that.” Some misguided folk, not understanding evolutionary psychology, think that this must logically be the sum of morality. But if there is no morality, there’s no reason to do such a thing—if it’s not “moral”, why bother?

You can probably see yourself pulling children off train tracks, even if it were not justified. But maximizing inclusive genetic fitness? If this isn’t moral, why bother? Who does it help? It wouldn’t even be much fun, all those egg or sperm donations.

And this is something you could say of most philosophies that have morality as a great light in the sky that shines from outside people. (To paraphrase Terry Pratchett.) If you believe that the meaning of life is to play non-zero-sum games because this is a trend built into the very universe itself...

Well, you might want to follow the corresponding ritual of reasoning about “the global trend of the universe” and implementing the result, so long as you believe it to be moral. But if you suppose that the light is switched off, so that the global trends of the universe are no longer moral, then why bother caring about “the global trend of the universe” in your decisions? If it’s not right, that is.

Whereas if there were a child stuck on the train tracks, you’d probably drag the kid off even if there were no moral justification for doing so.

In 1966, the Israeli psychologist Georges Tamarin presented, to 1,066 schoolchildren ages 8–14, the Biblical story of Joshua’s battle in Jericho:

“Then they utterly destroyed all in the city, both men and women, young and old, oxen, sheep, and asses, with the edge of the sword… And they burned the city with fire, and all within it; only the silver and gold, and the vessels of bronze and of iron, they put into the treasury of the house of the LORD.”

After being presented with the Joshua story, the children were asked:

“Do you think Joshua and the Israelites acted rightly or not?”

66% of the children approved, 8% partially disapproved, and 26% totally disapproved of Joshua’s actions.

A control group of 168 children was presented with an isomorphic story about “General Lin” and a “Chinese Kingdom 3,000 years ago”. 7% of this group approved, 18% partially disapproved, and 75% completely disapproved of General Lin.

“What a horrible thing it is, teaching religion to children,” you say, “giving them an off-switch for their morality that can be flipped just by saying the word ‘God’.” Indeed one of the saddest aspects of the whole religious fiasco is just how little it takes to flip people’s moral off-switches. As Hobbes once said, “I don’t know what’s worse, the fact that everyone’s got a price, or the fact that their price is so low.” You can give people a book, and tell them God wrote it, and that’s enough to switch off their moralities; God doesn’t even have to tell them in person.

But are you sure you don’t have a similar off-switch yourself? They flip so easily—you might not even notice it happening.

Leon Kass (of the President’s Council on Bioethics) is glad to murder people so long as it’s “natural”, for example. He wouldn’t pull out a gun and shoot you, but he wants you to die of old age and he’d be happy to pass legislation to ensure it.

And one of the non-obvious possibilities for such an off-switch is “morality”.

If you do happen to think that there is a source of morality beyond human beings… and I hear from quite a lot of people who are happy to rhapsodize on how Their-Favorite-Morality is built into the very fabric of the universe… then what if that morality tells you to kill people?

If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic—anywhere you care to put it—then what if you get a chance to read that stone tablet, and it turns out to say “Pain Is Good”? What then?

Maybe you should hope that morality isn’t written into the structure of the universe. What if the structure of the universe says to do something horrible?

And if an external objective morality does say that the universe should occupy some horrifying state… let’s not even ask what you’re going to do about that. No, instead I ask: What would you have wished for the external objective morality to be instead? What’s the best news you could have gotten, reading that stone tablet?

Go ahead. Indulge your fantasy. Would you want the stone tablet to say people should die of old age, or that people should live as long as they wanted? If you could write the stone tablet yourself, what would it say?

Maybe you should just do that?

I mean… if an external objective morality tells you to kill people, why should you even listen?

There is a courage that goes beyond even an atheist sacrificing their life and their hope of immortality. It is the courage of a theist who goes against what they believe to be the Will of God, choosing eternal damnation and defying even morality in order to rescue a slave, or speak out against hell, or kill a murderer… You don’t get a chance to reveal that virtue without making fundamental mistakes about how the universe works, so it is not something to which a rationalist should aspire. But it warms my heart that humans are capable of it.

I have previously spoken of how, to achieve rationality, it is necessary to have some purpose so desperately important to you as to be more important than “rationality”, so that you will not choose “rationality” over success.

To learn the Way, you must be able to unlearn the Way; so you must be able to give up the Way; so there must be something dearer to you than the Way. This is so in questions of truth, and in questions of strategy, and also in questions of morality.

The “moral void” after which this post is titled is not the terrifying abyss of utter meaninglessness. Which, for a bottomless pit, is surprisingly shallow; what are you supposed to do about it besides wearing black makeup?

No. The void I’m talking about is a virtue which is nameless.

Part of The Metaethics Sequence

Next post: “Created Already In Motion”

Previous post: “What Would You Do Without Morality?”