Morality should be Moral

This article is just some major questions concerning morality, each broken up into sub-questions to try to assist somebody in answering the major question; it's not a criticism of any morality in particular, but rather what I hope is a useful way to consider any moral system, and hopefully a help to people in challenging their assumptions about their own moral systems. I don't expect responses to try to answer these questions; indeed, I'd prefer you don't. My preferred responses would be changes, additions, clarifications, or challenges to the questions or to the objective of this article.

First major question: Could you morally advocate that other people adopt your moral system?

This isn't as trivial a question as it seems on its face. Take a strawman hedonism, for a very simple example. Is a hedonist's pleasure maximized by encouraging other people to pursue -their- pleasure? Or would it be better served by convincing them to pursue other people's pleasure (other people being a class of which our strawman hedonist is a member)?

It's not merely selfish moralities which suffer meta-moral problems. I've encountered a few near-Comtean altruists who will readily admit their morality makes them miserable; the idea that other people are worse off than them fills them with a deep guilt which they cannot resolve. If their goal is truly the happiness of others, spreading their moral system is a short-term evil. (It may be a long-term good, depending on how they do their accounting, but non-moral altruism isn't actually a rare quality, so I think an honest accounting would suggest their moral system doesn't add much additional altruism to the system, only a lot of guilt about the fact that not much altruistic action is taking place.)

Note: I use the word "altruism" here in its modern, non-Comtean sense. Altruism is that which benefits others.

Does your moral system make you unhappy, on the whole? Does it, like most moral systems, place a value on happiness? Would it make the average person less or more happy, if they and they alone adopted it? Are your expectations of the moral value of your moral system predicated on an unrealistic scenario of universal acceptance? Maybe your moral system isn't itself very moral.

Second: Do you think your moral system makes you a more moral person?

Does your moral system promote moral actions? What percentage of the effort you spend on your morality goes to feeling good because you feel like you've effectively promoted your moral system, rather than to promoting the values inherent in it?

Do you behave any differently than you would if you operated under a "common law" morality, such as social norms and laws? That is, does your ethical system make you behave differently than if you didn't possess it? Are you evaluating the merits of your moral system solely on how it answers hypothetical situations, rather than how it addresses your day-to-day life?
Does your moral system promote behaviors you're uncomfortable with and/or could not actually do, such as pushing people in the way of trolleys to save more people?

Third: Does your moral system promote morality, or itself as a moral system?

Is the primary contribution of your moral system to your life adding outrage that other people -don't- follow your moral system? Do you feel that people who follow other moral systems are immoral even if they end up behaving in exactly the same way you do? Does your moral system imply complex calculations which aren't actually taking place? Is the primary purpose of your moral system encouraging moral behavior, or defining what the moral behavior would have been after the fact?

Considered as a meme or memeplex, does your moral system seem better suited to propagating itself than to encouraging morality? Do you think "The primary purpose of this moral system is ensuring that these morals continue to exist" could be an accurate description of your moral system? Does the moral system promote the belief that people who don't follow it are completely immoral?

Fourth: Is the major purpose of your morality morality itself?

This is a rather tough question to elaborate with further questions, so I suppose I should try to clarify a bit first: Take a strawman utilitarianism where "utility" -really is- what the morality is all about, where somebody has painstakingly gone through and assigned utility points to various things (this is kind of common in game-based moral systems, where you're just accumulating some kind of moral points, positive or negative). Or imagine (tough, I know) a religious morality where the sole objective of the moral system is satisfying God's will. That is, does your moral system define morality to be about something abstract and immeasurable, defined only in the context of your moral system? Is your moral system a tautology, which must be accepted to even be meaningful?

This one can be difficult to identify from the inside, because to some extent -all- human morality is tautological; you have to identify it with respect to other moralities, to see if it's a unique island of tautology, or whether it applies to human moral concerns in the general case. With that in mind, when you argue with other people about your ethical system, do they -always- seem to miss the point? Do they keep trying to reframe moral questions in terms of other moral systems? Do they bring up things which have nothing to do with (your) morality?