Morality Isn’t Logical

What do I mean by “morality isn’t logical”? I mean it in the same sense that mathematics is logical but literary criticism isn’t: the “reasoning” we use to think about morality doesn’t resemble logical reasoning. All systems of logic that I’m aware of have a concept of proof and a method of verifying, with a high degree of certainty, whether an argument constitutes a proof. As long as the logic is consistent (and we have good reason to think that many of them are), once we verify a proof we can accept its conclusion without worrying that there may be another proof that establishes the opposite conclusion. With morality, though, we have no such method, and people routinely make moral arguments that can be reversed or called into question by other moral arguments. (Edit: For an example of this, see these posts.)
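To make the contrast concrete: here is a minimal sketch (my own toy system, not any standard formal logic) of what “a method of verifying whether an argument constitutes a proof” means. The verifier mechanically checks that every line of a proof is either an axiom or follows from earlier lines by a single inference rule, modus ponens; nothing analogous exists for a chain of moral arguments.

```python
# Toy proof checker for a system whose only inference rule is modus ponens.
# A formula is either an atom (a string) or a tuple ("implies", a, b).

def verify(axioms, proof):
    """Return True iff every proof line is an axiom or follows by modus
    ponens from two earlier lines."""
    derived = []
    for line in proof:
        ok = line in axioms or any(
            a == ("implies", b, line)  # some earlier line says: b -> line
            for a in derived for b in derived)
        if not ok:
            return False  # one unjustified step invalidates the whole proof
        derived.append(line)
    return True

axioms = {"p", ("implies", "p", "q")}
print(verify(axioms, ["p", ("implies", "p", "q"), "q"]))  # True
print(verify(axioms, ["q"]))  # False: "q" has no justification
```

The point of the sketch is that checking is purely mechanical: two people running the verifier on the same proof cannot disagree about whether it passes.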

Without being a system of logic, moral philosophical reasoning likely (or at least plausibly) doesn’t have any of the nice properties that a well-constructed system of logic would have, for example, consistency, validity, soundness, or even the more basic property that considering arguments in a different order, or in a different mood, won’t cause a person to accept an entirely different set of conclusions. For all we know, somebody trying to reason about a moral concept like “fairness” may just be taking a random walk as they move from one conclusion to another based on moral arguments they encounter or think up.
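The order-dependence worry can be illustrated with a toy model (entirely hypothetical, not a claim about how moral cognition actually works): an agent accepts each argument whose persuasive strength exceeds its current resistance, and each accepted argument raises that resistance. Feeding the same arguments to the agent in a different order leaves it with a different set of conclusions.

```python
# Toy model of order-dependent argument acceptance. The argument names and
# strengths below are made up purely for illustration.

def accepted_conclusions(arguments):
    """Accept an argument iff its strength beats current resistance;
    each acceptance raises the bar for later arguments."""
    resistance = 0.0
    accepted = []
    for name, strength in arguments:
        if strength > resistance:
            accepted.append(name)
            resistance += strength / 2
    return accepted

args = [("A: equal shares are fair", 1.0),
        ("B: need should trump equality", 1.2),
        ("C: desert should trump need", 0.8)]

print(accepted_conclusions(args))                 # accepts A and B
print(accepted_conclusions(list(reversed(args)))) # accepts C and B
```

Nothing here depends on the particular update rule; the sketch only shows that once acceptance depends on the path taken, the same pool of arguments can support different endpoints.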

In a recent post, Eliezer said “morality is logic”, by which he seems to mean… well, I’m still not exactly sure what, but one interpretation is that a person’s cognition about morality can be described as an algorithm, and that algorithm can be studied using logical reasoning. (Which of course is true, but in that sense both math and literary criticism, as well as every other subject of human study, would be logic.) In any case, I don’t think Eliezer is explicitly claiming that an algorithm-for-thinking-about-morality constitutes an algorithm-for-doing-logic, but I worry that the characterization of “morality is logic” may cause some connotations of “logic” to be inappropriately sneaked into “morality”. For example, Eliezer seems to (at least at one point) assume that considering moral arguments in a different order won’t cause a human to accept an entirely different set of conclusions, and maybe this is why. To fight this potential sneaking of connotations, I suggest that when you see the phrase “morality is logic”, you remind yourself that morality isn’t logical.