Egoism In Disguise

Originally posted at Living Within Reason

Epistemic status: moderately certain, but open to being convinced otherwise

tl;dr: any ethical system that relies on ethical intuitions is just egoism that’s given a veneer of objectivity.

Utilitarianism Relies on Moral Intuitions

Most rationalists are utilitarians, so much so that most rationalist writing assumes a utilitarian outlook. In a utilitarian system, whatever is “good” is what maximizes utility. Utility, technically, can be defined as anything, but most utilitarians attempt to maximize the well-being of humans and, to some extent, animals.

I am not a utilitarian. I am an egoist. I believe that the only moral duty we have is to act in our own self-interest (though it is generally in our self-interest to act in prosocial ways). I feel a certain alienation from a lot of rationalist writing because of this difference. However, I have long suspected that most utilitarian thinking is largely the same thing as egoism.

Recently, Ozy of Thing of Things wrote a post that illustrates this point well. Like a lot of rationalist writing, it addresses an ethical dilemma from a utilitarian framework. Ozy is trying to decide which creatures have a right to life, specifically considering humanely raised animals, human fetuses, and human babies. From the post:

Imagine that, among very wealthy people, there is a new fad for eating babies. Our baby farmer is an ethical person and he wants to make sure that his babies are farmed as ethically as possible. The babies are produced through artificial wombs; there are no adults who are particularly invested in the babies’ continued life. The babies are slaughtered at one month, well before they have long-term plans and preferences that are thwarted by death. In their one month of life, the babies have the happiest possible baby life: they are picked up immediately whenever they cry, they get lots of delicious milk, they’re held and rocked and sung to, their medical concerns are treated quickly, and they don’t ever have to sit in a poopy diaper. In every way, they live as happy and flourishing a life as a two-week-old baby can. Is the baby farm unethical?
If you’re like me, the answer is a quick “yes.”

Ozy’s main evidence for their conclusion is explicitly stated to be their moral intuition, resting on the idea that “I am horrified by the idea of a baby farm. I am not horrified by the idea of a beef cow farm.” Ozy goes on to examine this intuition, weigh it against other moral intuitions, and ultimately conclude that it is correct.

This is not surprising given that the ultimate authority for any consequentialist system is the individual’s moral intuitions (see Part 1). In a utilitarian system, moral intuitions “are the only reason you believe morality exists at all. They are also the standards by which you judge all moral philosophies.” People have many different moral intuitions, and must weigh them against one another when it comes to difficult ethical questions, but at bedrock, moral intuitions are the basis for the entire ethical system.

Moral Intuitions Are Subjective Preferences

From the previously-linked FAQ:

Moral intuitions are people’s basic ideas about morality. Some of them are hard-coded into the design of the human brain. Others are learned at a young age. They manifest as beliefs (“Hurting another person is wrong”), emotions (such as feeling sad whenever I see an innocent person get hurt) and actions (such as trying to avoid hurting another person.)

Notice that nothing in this explanation appeals to anything objective. Arguably, “hard-coded into the design of the human brain” could be seen as objective, but it is also trivial: if I do not share a specific intuition, then tautologically it is not hard-coded into my brain, so it cannot be used to resolve a difference of opinion.

Under an egoist worldview, there are still ethics, but they are based on self-interest. What is “good” is merely what I prefer. Human flourishing is good because the idea of human flourishing makes me smile. Kicking puppies is bad because it upsets me. These are not moral rules that can bind anyone else. They are merely my preferences, and to the extent that I want others to conform to my preferences, I must convince or coerce them.

The egoist outlook is entirely consistent with the utilitarian one. Consider the quoted paragraph above, rewritten to emphasize its subjectivity:

[My] moral intuitions are [my preferences for how the world should be]. Some of them are hard-coded into the design of [my] brain. Others are learned at a young age. They manifest as beliefs (“Hurting another person is wrong”), emotions (such as feeling sad whenever I see an innocent person get hurt) and actions (such as trying to avoid hurting another person.)

The language is changed, but the basic idea is the same. It emphasizes that my moral rules are based entirely on what appeals to me. At its heart, any system that relies on moral intuitions is indistinguishable from egoism.

Why Does This Matter?

In a sense, my conclusion here is rather trivial. Who cares if utilitarian ethics and egoism are largely the same thing? As an egoist, shouldn’t I be happy about this and encourage more people to be utilitarians?

The reason I would prefer that more people explicitly acknowledge the egoist foundations of their moral theory is that I believe moral judgment of others does great harm to our society. Utilitarianism dresses itself up as objective, and therefore leaves room to decide that other people have moral obligations, and that we are free (or even obligated) to judge and/or punish them for their moral failings.

Moral judgment of others makes us unlikely to accept that nobody deserves to suffer. If someone behaves immorally, we often feel that it is “justice” to punish that person regardless of the practical effects of the punishment. This attitude leads to outrage culture and is a major impediment to adopting an evidence-based criminal justice system.

If we insist on punishing someone for reasons other than trying to influence future behavior (theirs or others’), we are not making the world a better place. We are just being cruel. Nobody deserves to suffer. Even the worst people in the world are just acting according to their brain wiring. By all means, we should punish bad behavior, but we should do it in a way that is calculated to influence future behavior. We should recognize that, if we truly lived in a just world, everyone, even the worst of us, would have everything they want.

If, instead, we acknowledge that our moral beliefs are merely preferences for how we would like the world to work, we will inflict less useless suffering. If we acknowledge that attempting to force our morality on someone else is inherently coercive, we will do so only in circumstances where we feel that coercion is justified. We will stop punishing people based on the idea of retribution and can instead adopt an evidence-based system that punishes people only when the punishment is reasonably likely to create better future outcomes.

I have a preference for less suffering in the world. If you share that preference, consider adopting an explicitly egoist morality and encouraging others with similar preferences to do the same. We will never tame our most barbaric impulses unless we abandon the idea that we are able to morally judge others.