Is Morality Preference?

Followup to: Moral Complexities

In the dialogue “The Bedrock of Fairness”, I intended Yancy to represent morality-as-raw-fact, Zaire to represent morality-as-raw-whim, and Xannon to be a particular kind of attempt at compromising between them. Neither Xannon, Yancy, nor Zaire represents my own views—rather they are, in their disagreement, showing the problem that I am trying to solve. It is futile to present answers for which the questions are lacking.

But characters have independent life in the minds of all readers; when I create a dialogue, I don’t view my authorial intent as primary. Any good interpretation can be discussed. I meant Zaire to be asking for half the pie out of pure selfishness; many readers interpreted this as a genuine need… which is as interesting a discussion to have as any, though it’s a different discussion.

With this in mind, I turn to Subhan and Obert, who shall try to answer yesterday’s questions on behalf of their respective viewpoints.

Subhan makes the opening statement:

Subhan: “I defend this proposition: that there is no reason to talk about a ‘morality’ distinct from what people want.”

Obert: “I challenge. Suppose someone comes to me and says, ‘I want a slice of that pie you’re holding.’ It seems to me that they have just made a very different statement from ‘It is right that I should get a slice of that pie’. I have no reason at all to doubt the former statement—to suppose that they are lying to me about their desires. But when it comes to the latter proposition, I have reason indeed to be skeptical. Do you say that these two statements mean the same thing?”

Subhan: “I suggest that when the pie-requester says to you, ‘It is right for me to get some pie’, this asserts that you want the pie-requester to get a slice.”

Obert: “Why should I need to be told what I want?”

Subhan: “You take a needlessly restrictive view of wanting, Obert; I am not setting out to reduce humans to creatures of animal instinct. Your wants include those desires you label ‘moral values’, such as wanting the hungry to be fed—”

Obert: “And you see no distinction between my desire to feed the hungry, and my desire to eat all the delicious pie myself?”

Subhan: “No! They are both desires—backed by different emotions, perhaps, but both desires. To continue, the pie-requester hopes that you have a desire to feed the hungry, and so says, ‘It is right that I should get a slice of this pie’, to remind you of your own desire. We do not automatically know all the consequences of our own wants; we are not logically omniscient.”

Obert: “This seems psychologically unrealistic—I don’t think that’s what goes through the mind of the person who says, ‘I have a right to some pie’. In this latter case, if I deny them pie, they will feel indignant. If they are only trying to remind me of my own desires, why should they feel indignant?”

Subhan: “Because they didn’t get any pie, so they’re frustrated.”

Obert: “Unrealistic! Indignation at moral transgressions has a psychological dimension that goes beyond struggling with a stuck door.”

Subhan: “Then consider the evolutionary psychology. The pie-requester’s emotion of indignation would evolve as a display, first to remind you of the potential consequences of offending fellow tribe-members, and second, to remind any observing tribe-members of goals they may have to feed the hungry. By refusing to share, you would offend against a social norm—which is to say, a widely shared want.”

Obert: “So you take refuge in social wants as the essence of morality? But people seem to see a difference between desire and morality, even in the quiet of their own minds. They say things like: ‘I want X, but the right thing to do is Y… what shall I do?’”

Subhan: “So they experience a conflict between their want to eat pie, and their want to feed the hungry—which they know is also a want of society. It’s not predetermined that the prosocial impulse will be victorious, but they are both impulses.”

Obert: “And when, during WWII, a German hides Jews in their basement—against the wants of surrounding society—how then?”

Subhan: “People do not always define their in-group by looking at their next-door neighbors; they may conceive of their group as ‘good Christians’ or ‘humanitarians’.”

Obert: “I should sooner say that people choose their in-groups by looking for others who share their beliefs about morality—not that they construct their morality from their in-group.”

Subhan: “Oh, really? I should not be surprised if that were experimentally testable—if so, how much do you want to bet?”

Obert: “That the Germans who hid Jews in their basements, chose who to call their people by looking at their beliefs about morality? Sure. I’d bet on that.”

Subhan: “But in any case, even if a German resister has a desire to preserve life which is so strong as to go against their own perceived ‘society’, it is still their desire.”

Obert: “Yet they would attribute to that desire, the same distinction they make between ‘right’ and ‘want’—even when going against society. They might think to themselves, ‘How dearly I wish I could stay out of this, and keep my family safe. But it is my duty to hide these Jews from the Nazis, and I must fulfill that duty.’ There is an interesting moral question, as to whether it reveals greater heroism, to fulfill a duty eagerly, or to fulfill your duties when you are not eager. For myself I should just total up the lives saved, and call that their score. But I digress… The distinction between ‘right’ and ‘want’ is not explained by your distinction of socially shared and individual wants. The distinction between desire and duty seems to me a basic thing, which someone could experience floating alone in a spacesuit a thousand light-years from company.”

Subhan: “Even if I were to grant this psychological distinction, perhaps that is simply a matter of emotional flavoring. Why should I not describe perceived duties as a differently flavored want?”

Obert: “Duties, and should-ness, seem to have a dimension that goes beyond our whims. If we want different pizza toppings today, we can order a different pizza without guilt; but we cannot choose to make murder a good thing.”

Subhan: “Schopenhauer: ‘A man can do as he wills, but not will as he wills.’ You cannot decide to make salad taste better to you than cheeseburgers, and you cannot decide not to dislike murder. Furthermore, people do change, albeit rarely, those wants that you name ‘values’; indeed they are easier to change than our food tastes.”

Obert: “Ah! That is something I meant to ask you about. People sometimes change their morals; I would call this updating their beliefs about morality, but you would call it changing their wants. Why would anyone want to change their wants?”

Subhan: “Perhaps they simply find that their wants have changed; brains do change over time. Perhaps they have formed a verbal belief about what they want, which they have discovered to be mistaken. Perhaps society has changed, or their perception of society has changed. But really, in most cases you don’t have to go that far, to explain apparent changes of morality.”

Obert: “Oh?”

Subhan: “Let’s say that someone begins by thinking that Communism is a good social system, has some arguments, and ends by believing that Communism is a bad social system. This does not mean that their ends have changed—they may simply have gotten a good look at the history of Russia, and decided that Communism is a poor means to the end of raising standards of living. I challenge you to find me a case of changing morality in which people change their terminal values, and not just their beliefs about which acts have which consequences.”

Obert: “Someone begins by believing that God ordains against premarital sex; they find out there is no God; subsequently they approve of premarital sex. This, let us specify, is not because of fear of Hell; but because previously they believed that God had the power to ordain, or knowledge to tell them, what is right; in ceasing to believe in God, they updated their belief about what is right.”

Subhan: “I am not responsible for straightening others’ confusions; this one is merely in a general state of disarray around the ‘God’ concept.”

Obert: “All right; suppose I get into a moral argument with a man from a society that practices female circumcision. I do not think our argument is about the consequences to the woman; the argument is about the morality of these consequences.”

Subhan: “Perhaps the one falsely believes that women have no feelings—”

Obert: “Unrealistic, unrealistic! It is far more likely that the one hasn’t really considered whether the woman has feelings, because he doesn’t see any obligation to care. The happiness of women is not a terminal value to him. Thousands of years ago, most societies devalued consequences to women. They also had false beliefs about women, true—and false beliefs about men as well, for that matter—but nothing like the Victorian era’s complex rationalizations for how paternalistic rules really benefited women. The Old Testament doesn’t explain why it levies the death penalty for a woman wearing men’s clothing. It certainly doesn’t explain how this rule really benefits women after all. It’s not the sort of argument it would have occurred to the authors to rationalize! They didn’t care about the consequences to women.”

Subhan: “So they wanted different things than you; what of it?”

Obert: “See, now that is exactly why I cannot accept your viewpoint. Somehow, societies went from Old Testament attitudes, to democracies with female suffrage. And this transition—however it occurred—was caused by people saying, ‘What this society does to women is a great wrong!’, not, ‘I would personally prefer to treat women better.’ That’s not just a change in semantics—it’s the difference between being obligated to stand and deliver a justification, versus being able to just say, ‘Well, I prefer differently, end of discussion.’ And who says that humankind has finished with its moral progress? You’re yanking the ladder out from underneath a very important climb.”

Subhan: “Let us suppose that the change of human societies over the last ten thousand years, has been accompanied by a change in terminal values—”

Obert: “You call this a supposition? Modern political debates turn around vastly different valuations of consequences than in ancient Greece!”

Subhan: “I am not so sure; human cognitive psychology has not had time to change evolutionarily over that period. Modern democracies tend to appeal to our empathy for those suffering; that empathy existed in ancient Greece as well, but it was invoked less often. In each single moment of argument, I doubt you would find modern politicians appealing to emotions that didn’t exist in ancient Greece.”

Obert: “I’m not saying that emotions have changed; I’m saying that beliefs about morality have changed. Empathy merely provides emotional depth to an argument that can be made on a purely logical level: ‘If it’s wrong to enslave you, if it’s wrong to enslave your family and your friends, then how can it be right to enslave people who happen to be a different color? What difference does the color make?’ If morality is just preference, then there’s a very simple answer: ‘There is no right or wrong, I just like my own family better.’ You see the problem here?”

Subhan: “Logical fallacy: Appeal to consequences.”

Obert: “I’m not appealing to consequences. I’m showing that when I reason about ‘right’ or ‘wrong’, I am reasoning about something that does not behave like ‘want’ and ‘don’t want’.”

Subhan: “Oh? But I think that in reality, your rejection of morality-as-preference has a great deal to do with your fear of where the truth leads.”

Obert: “Logical fallacy: Ad hominem.”

Subhan: “Fair enough. Where were we?”

Obert: “If morality is preference, why would you want to change your wants to be more inclusive? Why would you want to change your wants at all?”

Subhan: “The answer to your first question probably has to do with a fairness instinct, I would suppose—a notion that the tribe should have the same rules for everyone.”

Obert: “I don’t think that’s an instinct. I think that’s a triumph of three thousand years of moral philosophy.”

Subhan: “That could be tested.”

Obert: “And my second question?”

Subhan: “Even if terminal values change, it doesn’t mean that terminal values are stored on a great stone tablet outside humanity. Indeed, it would seem to argue against it! It just means that some of the events that go on in our brains, can change what we want.”

Obert: “That’s your concept of moral progress? That’s your view of the last three thousand years? That’s why we have free speech, democracy, mass street protests against wars, nonlethal weapons, no more slavery—”

Subhan: “If you wander on a random path, and you compare all past states to your present state, you will see continuous ‘advancement’ toward your present condition—”

Obert: “Wander on a random path?”

Subhan: “I’m just pointing out that saying, ‘Look how much better things are now’, when your criterion for ‘better’ is comparing past moral values to yours, does not establish any directional trend in human progress.”

Obert: “Your strange beliefs about the nature of morality have destroyed your soul. I don’t even believe in souls, and I’m saying that.”

Subhan: “Look, depending on which arguments do, in fact, move us, you might be able to regard the process of changing terminal values as a directional progress. You might be able to show that the change had a consistent trend as we thought of more and more arguments. But that doesn’t show that morality is something outside us. We could even—though this is psychologically unrealistic—choose to regard you as computing a converging approximation to your ‘ideal wants’, so that you would have meta-values that defined both your present values and the rules for updating them. But these would be your meta-values and your ideals and your computation, just as much as pepperoni is your own taste in pizza toppings. You may not know your real favorite pizza topping until you’ve tasted many possible flavors.”

Obert: “Leaving out what it is that you just compared to pizza toppings, I begin to be suspicious of the all-embracingness of your viewpoint. No matter what my mind does, you can simply call it a still-more-modified ‘want’. I think that you are the one suffering from meta-level confusion, not I. Appealing to right is not the same as appealing to desire. Just because the appeal is judged inside my brain, doesn’t mean that the appeal is not to something more than my desires. Why can’t my brain compute duties as well as desires?”

Subhan: “What is the difference between duty and desire?”

Obert: “A duty is something you must do whether you want to or not.”

Subhan: “Now you’re just being incoherent. Your brain computes something it wants to do whether it wants to or not?”

Obert: “No, you are the one whose theory makes this incoherent. Which is why your theory ultimately fails to add up to morality.”

Subhan: “I say again that you underestimate the power of mere wanting. And more: You accuse me of incoherence? You say that I suffer from meta-level confusion?”

Obert: “Er… yes?”

To be continued…

Part of The Metaethics Sequence

Next post: “Is Morality Given?”

Previous post: “Moral Complexities”