Arguments Against Speciesism

There have been some posts about animals lately, for instance here and here. While normative assumptions about the treatment of nonhumans played an important role in the articles and were debated at length in the comment sections, I was missing a concise summary of these arguments. This post from over a year ago comes closest to what I have in mind, but I want to focus on some of the issues in more detail.

A while back, I read the following comment in a LessWrong discussion on uploads:

I do not at all understand this PETA-like obsession with ethical treatment of bits.

Aside from (carbon-based) humans, which other beings deserve moral consideration? Nonhuman animals? Intelligent aliens? Uploads? Nothing else?

This article is intended to shed light on these questions; it is, however, not the intent of this post to advocate a specific ethical framework. Instead, I’ll try to show that some ethical principles held by a lot of people are inconsistent with some of their other attitudes—an argument that doesn’t rely on ethics being universal or objective.

More precisely, I will develop the arguments behind anti-speciesism (and the rejection of analogous forms of discrimination, such as discrimination against uploads) to point out common inconsistencies in some people’s values. This will also provide an illustrative example of how coherentist ethical reasoning can be applied to shared intuitions. If there are no shared intuitions, ethical discourse will likely be unfruitful, so it is likely that not everyone will draw the same conclusions from the arguments here.

What Is Speciesism?

Speciesism, a term popularized (but not coined) by the philosopher Peter Singer, is meant to be analogous to sexism or racism. It refers to a discriminatory attitude in which a being is given less ethical consideration, i.e. its welfare or interests are cared about less, solely because of the “wrong” species membership. The “solely” here is crucial, and it is misunderstood often enough to warrant the redundant emphasis.

For instance, it is not speciesist to deny pigs the right to vote, just like it is not sexist to deny men the right to have an abortion performed on their body. Treating beings of different species differently is not speciesist if there are relevant criteria for doing so.

Singer summarized his case against speciesism in this essay. The argument that does most of the work is often referred to as the argument from marginal cases. A perhaps less anthropocentric, more fitting name would be argument from species overlap, as some philosophers (e.g. Oscar Horta) have pointed out.

The argument boils down to the question of choosing relevant criteria for moral concern. What properties do human beings possess that make us think it is wrong to torture them? Or to kill them? (Note that these are two different questions.) The argument from species overlap points out that all the typical or plausible suggestions for relevant criteria apply equally to dogs, pigs or chickens as they do to human infants or late-stage Alzheimer’s patients. Therefore, giving less ethical consideration to the former would be based merely on species membership, which is just as arbitrary as choosing race or sex as the relevant criterion (further justification for that claim follows below).

Here are some examples of commonly suggested criteria. Those who want to may pause at this point and think about the criteria they consult for whether it is wrong to inflict suffering on a being (and separately, those that are relevant to the wrongness of killing).

The suggestions are:

A: Capacity for moral reasoning

B: Being able to reciprocate

C: (Human-like) intelligence

D: Self-awareness

E: Future-related preferences; future plans

E’: Preferences / interests (in general)

F: Sentience (capacity for suffering and happiness)

G: Life / biological complexity

H: What I care about / feel sympathy or loyalty towards

The argument from species overlap points out that not all humans are equal. The sentiment behind “all humans are equal” is not that they are literally equal, but that equal interests/capacities deserve equal consideration. None of the above criteria except (in some empirical cases) H imply that human infants or late-stage dementia patients should be given more ethical consideration than cows, pigs or chickens.

While H is an unlikely criterion for direct ethical consideration (it could justify genocide in specific circumstances!), it is an important indirect factor. Most humans have much more empathy for fellow humans than for nonhuman animals. While this is not a criterion for giving humans more ethical consideration per se, it is nevertheless a factor that strongly influences ethical decision-making in real life.

However, such factors can’t apply to ethical reasoning at a theoretical/normative level, where all the relevant variables are looked at in isolation in order to come up with a consistent ethical framework that covers all possible cases.

If there were no intrinsic reasons for giving moral consideration to babies, then a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it. If we consider this implication to be unacceptable, then the same must apply to the situations nonhuman animals find themselves in on farms.

Side note: The question of whether killing a given being is wrong, and if so, “why” and “how wrong exactly”, is complex and outside the scope of this article. Instead of killing, the focus will be on suffering, and by suffering I mean something like wanting to get out of one’s current conscious state, or wanting to change some aspect of it. The empirical issue of which beings are capable of suffering is a different matter that I will (only briefly) discuss below. So in this context, giving a being moral consideration means that we don’t want it to suffer, leaving open the question of whether killing it painlessly is bad/neutral/good or prohibited/permissible/obligatory.

The main conclusion so far is that if we care about all the suffering of members of the human species, and if we reject question-begging reasoning that could also be used to justify racism or other forms of discrimination, then we must also care fully about suffering happening in nonhuman animals. This would imply that x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads. (Though admittedly the latter wouldn’t be anti-speciesist but rather anti-“substratist”, or anti-“fleshist”.)

The claim is that there is no way to block this conclusion without:

1. using reasoning that could analogously be used to justify racism or sexism
2. using reasoning that allows for hypothetical circumstances where it would be okay (or even called for) to torture babies, even in cases where utilitarian calculations prohibit it.

I’ve tried and have asked others to try—without success.

Caring about suffering

I have not given a reason why torturing babies or racism is bad or wrong. I’m hoping that the vast majority of people will share that intuition/value of mine, that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past.

Some might be willing to bite the bullet at this point, following some strongly held ethical principle of theirs (e.g. A, B, C, D, or E above) to the conclusion that humans who lack certain cognitive capacities are excluded from moral concern. One could point out that people’s empathy and indirect considerations about human rights, societal stability and so on will ensure that this “loophole” in such an ethical view almost certainly remains without consequences for beings with human DNA. It is a convenient Schelling point, after all, to care about all humans (or at least all humans outside their mother’s womb). However, I don’t see why absurd conclusions that will likely remain hypothetical would be significantly less bad than other absurd conclusions. Their mere possibility undermines the whole foundation one’s decision algorithm is grounded in. (Compare hypothetical problems for specific decision theories.)

Furthermore, while D and E seem plausible candidates for reasons against killing a being with these properties (E is in fact Peter Singer’s view on the matter), none of the criteria from A to E seem relevant to suffering, to whether a being can be harmed or benefited. The case for these being bottom-up morally relevant criteria for the moral relevance of suffering (or happiness) is very weak, to say the least.

Maybe that’s the speciesist’s central confusion: that the rationality/sapience of a being is somehow relevant to whether its suffering matters morally. For us ourselves, this clearly does not seem to be the case. If I were told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not as though I would be less afraid of the torture, or care less about averting it!

Those who do consider biting the bullet should ask themselves whether they would have defended that view in all contexts, or whether they might be driven towards such a conclusion by a self-serving bias. There seems to be a strange and sudden increase in the frequency of people who are willing to claim that there is nothing intrinsically wrong with torturing babies when the subject is animal rights, or more specifically, the steak they intend to have for dinner.

It is an entirely different matter if people genuinely think that animals, human infants, or late-stage dementia patients are not sentient. To be clear about what is meant by sentience:

A sentient being is one for whom “it feels like something to be that being”.

I find it highly implausible that only self-aware or “sapient” beings are sentient, but if that were true, it would constitute a compelling reason against caring for at least most nonhuman animals, for the same reason that it would be pointless to care about pebbles for the pebbles’ sake. If all nonhumans truly weren’t sentient, then obviously singling out humans for the sphere of moral concern would not be speciesist.

What irritates me, however, is that anyone advocating such a view should, it seems to me, still have to factor in a significant probability of being wrong, given that both philosophy of mind and the neuroscience that goes with it are hard and, as far as I’m aware, not quite settled yet. The issue matters because of the huge numbers of nonhuman animals at stake and because of the terrible conditions these beings live in.

I rarely see this uncertainty acknowledged. If we imagine the torture scenario outlined above, how confident would we really be that the torture “won’t matter” if our own advanced cognitive capacities are temporarily suspended?

Why species membership really is an absurd criterion

At the beginning of the article, I wrote that I’d get back to this point for those not convinced. Some readers may still feel that there is something special about being a member of the human species. Some may be tempted to think about the concept of “species” as if it were a fundamental concept, a Platonic form.

The following likely isn’t news to most of the LW audience, but it is worth spelling out anyway: there exists a continuum of “species” in thing-space, as well as on the actual evolutionary timescale. The species boundaries seem obvious just because the intermediates kept evolving or went extinct. And even if that were not the case, we could imagine it. The theoretical possibility is enough to make the philosophical case, even though, psychologically, actualities are more convincing.

We can imagine a continuous line-up of ancestors, always daughter and mother, from modern humans back to the common ancestor of humans and, say, cows, and then forward in time again to modern cows. How would we then divide this line-up into distinct species? Morally significant lines would have to be drawn between mother and daughter, but that seems absurd! There are several different definitions of “species” used in biology. A common criterion—for sexually reproducing organisms, anyway—is whether groups of beings (of different sex) can have fertile offspring together. If so, they belong to the same species.

That is a rather odd way of determining whether one cares about the suffering of some hominid creature in the line-up of ancestors. Why should the ability to produce fertile offspring, for instance, be relevant to determining whether some instance of suffering matters to us?

Moreover, is that really the terminal value of people who claim they only care about humans, or could it be that they would, upon reflection, revoke such statements?

And what about transhumanism? I remember that a couple of years ago, I thought I had found a decisive argument against human enhancement. I thought it would likely lead to speciation, and somehow the thought of that directly implied that posthumans would treat the remaining humans badly, and so the whole thing became immoral in my mind. Obviously this is absurd; there is nothing wrong with speciation per se, and if posthumans are anti-speciesist, then the remaining humans would have nothing to fear! But given the speciesism in today’s society, it is all too understandable that people would be concerned about this. If we imagine the huge extent to which a posthuman, not to mention a strong AI, would be superior to current humans, isn’t that a bit like comparing chickens to us?

A last possible objection I can think of: suppose one held the belief that group averages are what matters, and that all members of the human species deserve equal protection because of the group average for a criterion that is considered relevant and that would, without the group-average rule, deny moral consideration to some sentient humans.

This defense, too, doesn’t work. Aside from seeming suspiciously arbitrary, such a view would imply absurd conclusions. A thought experiment for illustration: a pig with a macro-mutation is born; she develops child-like intelligence and the ability to speak. Do we refuse to allow her to live unharmed—or even to let her go to school—simply because she belongs to a group (defined presumably by snout shape, or DNA, or whatever the criteria for “pigness” are) with an average that is too low?

Or imagine you are the head of an architecture bureau and are looking to hire a new aspiring architect. Is tossing out an application written by a brilliant woman going to increase the expected success of your firm, assuming that women are, on average, less skilled at spatial imagination than men? Surely not!

Moreover, taking group averages as our ethical criterion requires us to first define the relevant groups. Why even take species-groups instead of groups defined by skin color, weight or height? Why single out one property and not others?


Our speciesism is an anthropocentric bias without any reasonable foundation. It would be completely arbitrary to give special consideration to a being simply because of its species membership. Doing so would lead to a number of implications that most people clearly reject. A strong case can be made that suffering is bad in virtue of being suffering, regardless of where it happens. If the suffering or deaths of nonhuman animals deserve no ethical consideration, then human beings with the same relevant properties (of which all plausible ones seem to come down to having similar levels of awareness) deserve no intrinsic ethical consideration either, barring speciesism.

Assuming that we would feel uncomfortable giving justifications or criteria for our scope of ethical concern that can analogously be used to defend racism or sexism, those not willing to bite the bullet about torturing babies are forced by considerations of consistency to care about animal suffering just as much as they care about human suffering.

Such a view leaves room for probabilistic discounting in cases where we are empirically uncertain whether beings are capable of suffering, but we should be on the lookout for biases in our assessments.

Edit: As Carl Shulman has pointed out, discounting may also apply to “intensity of sentience”, because it seems at least plausible that shrimps (for instance), if they are sentient, can experience less suffering than e.g. a whale.