Is Morality Given?

Continuation of: Is Morality Preference?

(Disclaimer: Neither Subhan nor Obert represents my own position on morality; rather they represent different sides of the questions I hope to answer.)

Subhan: “What is this ‘morality’ stuff, if it is not a preference within you?”

Obert: “I know that my mere wants don’t change what is right; but I don’t claim to have absolute knowledge of what is right—”

Subhan: “You’re not escaping that easily! How does a universe in which murder is wrong differ from a universe in which murder is right? Can you detect the difference experimentally? If the answer is ‘No’, then how does any human being come to know that murder is wrong?”

Obert: “Am I allowed to say ‘I don’t know’?”

Subhan: “No. You believe now that murder is wrong. You must believe you already have evidence, and you should be able to present it now.”

Obert: “That’s too strict! It’s like saying to a hunter-gatherer, ‘Why is the sky blue?’ and expecting an immediate answer.”

Subhan: “No, it’s like saying to a hunter-gatherer: Why do you believe the sky is blue?”

Obert: “Because it seems blue, just as murder seems wrong. Just don’t ask me what the sky is, or how I can see it.”

Subhan: “But—aren’t we discussing the nature of morality?”

Obert: “That, I confess, is not one of my strong points. I specialize in plain old morality. And as a matter of morality, I know that I can’t make murder right just by wanting to kill someone.”

Subhan: “But if you wanted to kill someone, you would say, ‘I know murdering this guy is right, and I couldn’t make it wrong just by not wanting to do it.’”

Obert: “Then, if I said that, I would be wrong. That’s common moral sense, right?”

Subhan: “Argh! It’s difficult to even argue with you, since you won’t tell me exactly what you think morality is made of, or where you’re getting all these amazing moral truths—”

Obert: “Well, I do regret having to frustrate you. But it’s more important that I act morally, than that I come up with amazing new theories of the nature of morality. I don’t claim that my strong point is in explaining the fundamental nature of morality. Rather, my strong point is coming up with theories of morality that give normal moral answers to questions like, ‘If you feel like killing someone, does that make it right to do so?’ The common-sense answer is ‘No’ and I really see no reason to adopt a theory that makes the answer ‘Yes’. Adding up to moral normality—that is my theory’s strong point.”

Subhan: “Okay… look. You say that, if you believed it was right to murder someone, you would be wrong.”

Obert: “Yes, of course! And just to cut off any quibbles, we’ll specify that we’re not talking about going back in time and shooting Stalin, but rather, stalking some innocent bystander through a dark alley and slitting their throat for no other reason but my own enjoyment. That’s wrong.”

Subhan: “And anyone who says murder is right, is mistaken.”

Obert: “Yes.”

Subhan: “Suppose there’s an alien species somewhere in the vastness of the multiverse, who evolved from carnivores. In fact, through most of their evolutionary history, they were cannibals. They’ve evolved different emotions from us, and they have no concept that murder is wrong—”

Obert: “Why doesn’t their society fall apart in an orgy of mutual killing?”

Subhan: “That doesn’t matter for our purposes of theoretical metaethical investigation. But since you ask, we’ll suppose that the Space Cannibals have a strong sense of honor—they won’t kill someone they promise not to kill; they have a very strong idea that violating an oath is wrong. Their society holds together on that basis, and on the basis of vengeance contracts with private assassination companies. But so far as the actual killing is concerned, the aliens just think it’s fun. When someone gets executed for, say, driving through a traffic light, there’s a bidding war for the rights to personally tear out the offender’s throat.”

Obert: “Okay… where is this going?”

Subhan: “I’m proposing that the Space Cannibals not only have no sense that murder is wrong—indeed, they have a positive sense that killing is an important part of life—but moreover, there’s no path of arguments you could use to persuade a Space Cannibal of your view that murder is wrong. There’s no fact the aliens can learn, and no chain of reasoning they can discover, which will ever cause them to conclude that murder is a moral wrong. Nor is there any way to persuade them that they should modify themselves to perceive things differently.”

Obert: “I’m not sure I believe that’s possible—”

Subhan: “Then you believe in universally compelling arguments processed by a ghost in the machine. For every possible mind whose utility function assigns terminal value +1, mind design space contains an equal and opposite mind whose utility function assigns terminal value −1. A mind is a physical device and you can’t have a little blue woman pop out of nowhere and make it say 1 when the physics calls for it to say 0.”

Obert: “Suppose I were to concede this. Then?”

Subhan: “Then it’s possible to have an alien species that believes murder is not wrong, and moreover, will continue to believe this given knowledge of every possible fact and every possible argument. Can you say these aliens are mistaken?”

Obert: “Maybe it’s the right thing to do in their very different, alien world—”

Subhan: “And then they land on Earth and start slitting human throats, laughing all the while, because they don’t believe it’s wrong. Are they mistaken?”

Obert: “Yes.”

Subhan: “Where exactly is the mistake? In which step of reasoning?”

Obert: “I don’t know exactly. My guess is that they’ve got a bad axiom.”

Subhan: “Dammit! Okay, look. Is it possible that—by analogy with the Space Cannibals—there are true moral facts of which the human species is not only presently unaware, but incapable of perceiving in principle? Could we have been born defective—incapable even of being compelled by the arguments that would lead us to the light? Moreover, born without any desire to modify ourselves to be capable of understanding such arguments? Could we be irrevocably mistaken about morality—just like you say the Space Cannibals are?”

Obert: “I… guess so…”

Subhan: “You guess so? Surely this is an inevitable consequence of believing that morality is a given, independent of anyone’s preferences! Now, is it possible that we, not the Space Cannibals, are the ones who are irrevocably mistaken in believing that murder is wrong?”

Obert: “That doesn’t seem likely.”

Subhan: “I’m not asking you if it’s likely, I’m asking you if it’s logically possible! If it’s not possible, then you have just confessed that human morality is ultimately determined by our human constitutions. And if it is possible, then what distinguishes this scenario of ‘humanity is irrevocably mistaken about morality’, from finding a stone tablet on which is written the phrase ‘Thou Shalt Murder’ without any known justification attached? How is a given morality any different from an unjustified stone tablet?”

Obert: “Slow down. Why does this argument show that morality is determined by our own constitutions?”

Subhan: “Once upon a time, theologians tried to say that God was the foundation of morality. And ever since the time of the ancient Greeks, philosophers have been sophisticated enough to go on and ask the next question—‘Why follow God’s commands?’ Does God have knowledge of morality, so that we should follow Its orders as good advice? But then what is this morality, outside God, of which God has knowledge? Do God’s commands determine morality? But then why, morally, should one follow God’s orders?”

Obert: “Yes, this demolishes attempts to answer questions about the nature of morality just by saying ‘God!’, unless you answer the obvious further questions. But so what?”

Subhan: “And furthermore, let us castigate those who made the argument originally, for the sin of trying to cast off responsibility—trying to wave a scripture and say, ‘I’m just following God’s orders!’ Even if God had told them to do a thing, it would still have been their own decision to follow God’s orders.”

Obert: “I agree—as a matter of morality, there is no evading of moral responsibility. Even if your parents, or your government, or some kind of hypothetical superintelligence, tells you to do something, you are responsible for your decision in doing it.”

Subhan: “But you see, this also demolishes the idea of any morality that is outside, beyond, or above human preference. Just substitute ‘morality’ for ‘God’ in the argument!”

Obert: “What?”

Subhan: “John McCarthy said: ‘You say you couldn’t live if you thought the world had no purpose. You’re saying that you can’t form purposes of your own—that you need someone to tell you what to do. The average child has more gumption than that.’ For every kind of stone tablet that you might imagine anywhere, in the trends of the universe or in the structure of logic, you are still left with the question: ‘And why obey this morality?’ It would be your decision to follow this trend of the universe, or obey this structure of logic. Your decision—and your preference.”

Obert: “That doesn’t follow! Just because it is my decision to be moral—and even because there are drives in me that lead me to make that decision—it doesn’t follow that the morality I follow consists merely of my preferences. If someone gives me a pill that makes me prefer to not be moral, to commit murder, then this just alters my preference—but not the morality; murder is still wrong. That’s common moral sense—”

Subhan: “I beat my head against my keyboard! What about scientific common sense? If morality is this mysterious given thing, from beyond space and time—and I don’t even see why we should follow it, in that case—but in any case, if morality exists independently of human nature, then isn’t it a remarkable coincidence that, say, love is good?”

Obert: “Coincidence? How so?”

Subhan: “Just where on Earth do you think the emotion of love comes from? If the ancient Greeks had ever thought of the theory of natural selection, they could have looked at the human institution of sexual romance, or parental love for that matter, and deduced in one flash that human beings had evolved—or at least derived tremendous Bayesian evidence for human evolution. Parental bonds and sexual romance clearly display the signature of evolutionary psychology—they’re archetypal cases, in fact, so obvious we usually don’t even see it.”

Obert: “But love isn’t just about reproduction—”

Subhan: “Of course not; individual organisms are adaptation-executers, not fitness-maximizers. But for something independent of humans, morality looks remarkably like godshatter of natural selection. Indeed, it is far too much coincidence for me to credit. Is happiness morally preferable to pain? What a coincidence! And if you claim that there is any emotion, any instinctive preference, any complex brain circuitry in humanity which was created by some external morality thingy and not natural selection, then you are infringing upon science and you will surely be torn to shreds—science has never needed to postulate anything but evolution to explain any feature of human psychology—”

Obert: “I’m not saying that humans got here by anything except evolution.”

Subhan: “Then why does morality look so amazingly like a product of an evolved psychology?”

Obert: “I don’t claim perfect access to moral truth; maybe, being human, I’ve made certain mistakes about morality—”

Subhan: “Say that—forsake love and life and happiness, and follow some useless damn trend of the universe or whatever—and you will lose every scrap of the moral normality that you once touted as your strong point. And I will be right here, asking, ‘Why even bother?’ It would be a pitiful mind indeed that demanded authoritative answers so strongly, that it would forsake all good things to have some authority beyond itself to follow.”

Obert: “All right… then maybe the reason morality seems to bear certain similarities to our human constitutions, is that we could only perceive morality at all, if we happened, by luck, to evolve in consonance with it.”

Subhan: “Horsemanure.”

Obert: “Fine… you’re right, that wasn’t very plausible. Look, I admit you’ve driven me into quite a corner here. But even if there were nothing more to morality than preference, I would still prefer to act as if morality were real. I mean, if it’s all just preference, that way is as good as anything else—”

Subhan: “Now you’re just trying to avoid facing reality! Like someone who says, ‘If there is no Heaven or Hell, then I may as well still act as if God’s going to punish me for sinning.’”

Obert: “That may be a good metaphor, in fact. Consider two theists, in the process of becoming atheists. One says, ‘There is no Heaven or Hell, so I may as well cheat and steal, if I can get away without being caught, since there’s no God to watch me.’ And the other says, ‘Even though there’s no God, I intend to pretend that God is watching me, so that I can go on being a moral person.’ Now they are both mistaken, but the first is straying much further from the path.”

Subhan: “And what is the second one’s flaw? Failure to accept personal responsibility!”

Obert: “Well, and I admit I find that a more compelling argument than anything else you have said. Probably because it is a moral argument, and it has always been morality, not metaethics, with which I claimed to be concerned. But even so, after our whole conversation, I still maintain that wanting to murder someone does not make murder right. Everything that you have said about preference is interesting, but it is ultimately about preference—about minds and what they are designed to desire—and not about this other thing that humans sometimes talk about, ‘morality’. I can just ask Moore’s Open Question: Why should I care about human preferences? What makes following human preferences right? By changing a mind, you can change what it prefers; you can even change what it believes to be right; but you cannot change what is right. Anything you talk about, that can be changed in this way, is not ‘right-ness’.”

Subhan: “So you take refuge in arguing from definitions?”

Obert: “You know, when I reflect on this whole argument, it seems to me that your position has the definite advantage when it comes to arguments about ontology and reality and all that stuff—”

Subhan: “‘All that stuff’? What else is there, besides reality?”

Obert: “Okay, the morality-as-preference viewpoint is a lot easier to shoehorn into a universe of quarks. But I still think the morality-as-given viewpoint has the advantage when it comes to, you know, the actual morality part of it—giving answers that are good in the sense of being morally good, not in the sense of being a good reductionist. Because, you know, there are such things as moral errors, there is moral progress, and you really shouldn’t go around thinking that murder would be right if you wanted it to be right.”

Subhan: “That sounds to me like the logical fallacy of appealing to consequences.”

Obert: “Oh? Well, it sounds to me like an incomplete reduction—one that doesn’t quite add up to normality.”

Part of The Metaethics Sequence

Next post: “Where Recursive Justification Hits Bottom”

Previous post: “Is Morality Preference?”