The Moral Void

Followup to: What Would You Do Without Morality?, Something to Protect

Once, discussing “horrible job interview questions” to ask candidates for a Friendly AI project, I suggested the following:

Would you kill babies if it was inherently the right thing to do? Yes [] No []

If “no”, under what circumstances would you not do the right thing to do? ___________

If “yes”, how inherently right would it have to be, for how many babies? ___________

Yesterday I asked, “What would you do without morality?” There were numerous objections to the question, as well there should have been. Nonetheless there is more than one kind of person who can benefit from being asked this question. Let’s say someone gravely declares, of some moral dilemma—say, a young man in Vichy France who must choose between caring for his mother and fighting for the Resistance—that there is no moral answer; both options are wrong and blamable; whoever faces the dilemma has had poor moral luck. Fine, let’s suppose this is the case: then when you cannot be innocent, justified, or praiseworthy, what will you choose anyway?

Many interesting answers were given to my question, “What would you do without morality?”. But one kind of answer was notable by its absence:

No one said, “I would ask what kind of behavior pattern was likely to maximize my inclusive genetic fitness, and execute that.” Some misguided folk, not understanding evolutionary psychology, think that this must logically be the sum of morality. But if there is no morality, there’s no reason to do such a thing—if it’s not “moral”, why bother?

You can probably see yourself pulling children off train tracks, even if it were not justified. But maximizing inclusive genetic fitness? If this isn’t moral, why bother? Who does it help? It wouldn’t even be much fun, all those egg or sperm donations.

And this is something you could say of most philosophies that have morality as a great light in the sky that shines from outside people. (To paraphrase Terry Pratchett.) If you believe that the meaning of life is to play non-zero-sum games because this is a trend built into the very universe itself…

Well, you might want to follow the corresponding ritual of reasoning about “the global trend of the universe” and implementing the result, so long as you believe it to be moral. But if you suppose that the light is switched off, so that the global trends of the universe are no longer moral, then why bother caring about “the global trend of the universe” in your decisions? If it’s not right, that is.

Whereas if there were a child stuck on the train tracks, you’d probably drag the kid off even if there were no moral justification for doing so.

In 1966, the Israeli psychologist Georges Tamarin presented, to 1,066 schoolchildren ages 8–14, the Biblical story of Joshua’s battle in Jericho:

“Then they utterly destroyed all in the city, both men and women, young and old, oxen, sheep, and asses, with the edge of the sword… And they burned the city with fire, and all within it; only the silver and gold, and the vessels of bronze and of iron, they put into the treasury of the house of the LORD.”

After being presented with the Joshua story, the children were asked:

“Do you think Joshua and the Israelites acted rightly or not?”

66% of the children approved, 8% partially disapproved, and 26% totally disapproved of Joshua’s actions.

A control group of 168 children was presented with an isomorphic story about “General Lin” and a “Chinese Kingdom 3,000 years ago”. 7% of this group approved, 18% partially disapproved, and 75% completely disapproved of General Lin.

“What a horrible thing it is, teaching religion to children,” you say, “giving them an off-switch for their morality that can be flipped just by saying the word ‘God’.” Indeed one of the saddest aspects of the whole religious fiasco is just how little it takes to flip people’s moral off-switches. As Hobbes once said, “I don’t know what’s worse, the fact that everyone’s got a price, or the fact that their price is so low.” You can give people a book, and tell them God wrote it, and that’s enough to switch off their moralities; God doesn’t even have to tell them in person.

But are you sure you don’t have a similar off-switch yourself? They flip so easily—you might not even notice it happening.

Leon Kass (of the President’s Council on Bioethics) is glad to murder people so long as it’s “natural”, for example. He wouldn’t pull out a gun and shoot you, but he wants you to die of old age and he’d be happy to pass legislation to ensure it.

And one of the non-obvious possibilities for such an off-switch is “morality”.

If you do happen to think that there is a source of morality beyond human beings… and I hear from quite a lot of people who are happy to rhapsodize on how Their-Favorite-Morality is built into the very fabric of the universe… then what if that morality tells you to kill people?

If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic—anywhere you care to put it—then what if you get a chance to read that stone tablet, and it turns out to say “Pain Is Good”? What then?

Maybe you should hope that morality isn’t written into the structure of the universe. What if the structure of the universe says to do something horrible?

And if an external objective morality does say that the universe should occupy some horrifying state… let’s not even ask what you’re going to do about that. No, instead I ask: What would you have wished for the external objective morality to be instead? What’s the best news you could have gotten, reading that stone tablet?

Go ahead. Indulge your fantasy. Would you want the stone tablet to say people should die of old age, or that people should live as long as they wanted? If you could write the stone tablet yourself, what would it say?

Maybe you should just do that?

I mean… if an external objective morality tells you to kill people, why should you even listen?

There is a courage that goes beyond even an atheist sacrificing their life and their hope of immortality. It is the courage of a theist who goes against what they believe to be the Will of God, choosing eternal damnation and defying even morality in order to rescue a slave, or speak out against hell, or kill a murderer… You don’t get a chance to reveal that virtue without making fundamental mistakes about how the universe works, so it is not something to which a rationalist should aspire. But it warms my heart that humans are capable of it.

I have previously spoken of how, to achieve rationality, it is necessary to have some purpose so desperately important to you as to be more important than “rationality”, so that you will not choose “rationality” over success.

To learn the Way, you must be able to unlearn the Way; so you must be able to give up the Way; so there must be something dearer to you than the Way. This is so in questions of truth, and in questions of strategy, and also in questions of morality.

The “moral void” after which this post is titled is not the terrifying abyss of utter meaninglessness. Which, for a bottomless pit, is surprisingly shallow; what are you supposed to do about it besides wearing black makeup?

No. The void I’m talking about is a virtue which is nameless.

Part of The Metaethics Sequence

Next post: “Created Already In Motion”

Previous post: “What Would You Do Without Morality?”