The Ethical Status of Non-human Animals

There’s been some previous discussion of vegetarianism on this site, though less than I expected. It’s a complicated topic, so I want to focus on one critical sub-issue: within a consequentialist/utilitarian framework, what should the status of non-human animals be? Do only humans matter? If non-human animals matter only a little, just how much do they matter?

I argue that species-specific weighting factors have no place in our moral calculus. If two minds experience the same sort of stimulus, the species of those minds shouldn’t affect how good or bad we believe that to be. I owe the line of argument I’ll be sketching to Peter Singer’s work. His book Practical Ethics is the best statement of the case that I’m aware of.

Front-loaded definitions and summary:

  • Self-aware: A self-aware mind is one that understands that it exists and that it persists through time.
  • Sentient: A sentient mind is one that has subjective experiences, such as pleasure and pain. I assume that self-awareness subsumes sentience (i.e. all self-aware minds are also sentient, but not vice versa).
  • Person: A self-aware mind.
  • A human may be alive but non-sentient, due to injury or birth defects.
  • A human may be sentient but not self-aware, due to injury, birth defects, or infancy.
  • Non-human persons are possible: hypothetically, aliens and AIs; controversially, non-human great apes.
  • Many non-human animals are sentient; many are not.
  • Utilitarian ethics involve moral calculus: summing the impacts of an action (or some proxy for them, such as preferences) on all minds.
  • When performing this calculus, do sentient (but not self-aware) minds count at all? If so, do they count as much as persons?
  • If they count for zero, there’s no ethical problem with secretly torturing puppies, just for fun.
  • We’re tempted to believe that sentient minds count for something, but less than persons.
  • I think this is just a cover for what we’re really tempted to believe: humans count for more than non-humans, not because of the character of our minds, but simply because of the species we belong to.
  • Historically, allowing your ethical system to arbitrarily promote the interests of those similar to you has led to very bad results.

Personhood and Sentience

Cognitively healthy mature humans have minds that differ in many ways from those of the other species on Earth. The most striking difference is probably the level of abstraction at which we are able to think. A related ability is that we can form detailed plans far into the future. We also have a sense of self that persists through time.

Let’s call a mind that is fully self-aware a person. Now, whether or not there are any non-human persons on Earth today, non-human persons are certainly possible. They might include aliens, artificial intelligences, or extinct ancestral species. There are also humans who are not persons, due to brain damage, birth defects, or perhaps simply infancy[1]. Minds that are not self-aware in this way, but are able to have subjective experiences, let’s call sentient.

Consequentialism/​Utilitarianism

This is an abridged summary of consequentialism/​utilitarianism, included for completeness. It’s designed to tell you what I’m on about if you’ve never heard of this before. For a full argument in support of this framework, see elsewhere.

A consequentialist ethical framework is one in which the ethical status of an action is judged by the “goodness” of the possible worlds it creates, weighted by the probability of those outcomes[2]. Nailing down a “goodness function” (usually called a utility function) that returns a value in [0, 1] for the desirability of a possible world is understandably difficult. But the parts that are most difficult are also the parts that seldom matter. The basics are easy to agree upon. Many of our subjective experiences are either sharply good or sharply bad. Roughly, a world in which minds experience lots of good things and few bad things should be preferable to a world in which minds have lots of negative experiences and few positive ones.

In particular, it’s obvious that pain is bad, all else being equal. A little pain can be a worthwhile price for good experiences later, but it’s considered a price precisely because we’d prefer not to pay it. It’s a negative on the ledger. So, an action which reduces the amount of pain in the world, without doing sufficient other harms to balance it out, would be judged “ethical”.

The question is: should we only consider the minds of persons—self-conscious minds that understand they are a mind with a past, present, and future? Or should we also consider merely sentient minds? And if we do consider sentient minds, should we down-weight them in our utility calculation?

Do the experiences of merely sentient minds receive a weight of 0, 1, or somewhere in between?
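To make the question concrete, here is a minimal sketch of how such a weight could enter the calculus. It’s my own toy illustration rather than anything from Singer or the argument above: the experience values and the sentience_weight parameter are invented for the example.

```python
# Toy utilitarian sum with a single weight for merely sentient minds.
# All numbers here are made up for illustration.

def total_utility(experiences, sentience_weight):
    """Sum the signed value of each experience, down-weighting those had by
    merely sentient (non-self-aware) minds by `sentience_weight`."""
    total = 0.0
    for value, is_person in experiences:
        total += value if is_person else sentience_weight * value
    return total

# Two candidate worlds: the same pain (-1.0) lands on a person in one,
# and on a merely sentient mind in the other.
world_person_hurt = [(-1.0, True)]
world_sentient_hurt = [(-1.0, False)]

for w in (0.0, 0.5, 1.0):
    print(w, total_utility(world_person_hurt, w), total_utility(world_sentient_hurt, w))

# At w = 0.0 the sentient mind's pain costs nothing, so secret puppy torture
# is "free"; at w = 1.0 the same pain counts the same whichever kind of mind
# feels it; anything in between is the "counts, but less" position.
```

The rest of this post is essentially an argument about which value of that parameter you can endorse without smuggling in species-membership.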

How much do sentient non-persons count?

Be careful before answering “0”. That answer implies that a person can never treat a merely sentient mind unethically, except by violating the preferences of other persons. Torturing puppies for passing amusement would be ethically A-OK, so long as you kept it hidden from other persons who might mind. I’m not a moral realist: I don’t believe that when I say “X is unethical” I’m describing a property of objective reality. I think it’s more like deduction given axioms. So if your utility function really is such that you ascribe zero weight to the suffering of merely sentient minds, I can’t say you’re objectively correct or incorrect. I doubt many people can honestly claim this, though.

Is a 1.0 weight not equally ridiculous, though? Let’s take a simple negative stimulus, pain. Imagine you had to choose between possible worlds in which either a cognitively normal adult human or a cognitively normal pig received a small shallow cut that crossed a section of skin connected to approximately the same number of nerves. The wound will be delivered with a sterile instrument and promptly cleaned and covered, so the only relevant thing here is the pain. The pig will also feel some fear, but let’s ignore that.

You might claim that a utility function that didn’t prefer that the pig feel the pain was hopelessly broken. But remember that the weight we’re talking about applies to kinds of minds, not to members of particular species. If you had to decide between a cognitively normal adult human and a human who had suffered brain damage such that they were merely sentient, would the decision be so easy? How about if you had to decide between a cognitively normal adult human and a human infant?

The problem with speciesism

If you want to claim that causing the pig pain is preferable to causing a sentient but not self-aware human pain, you’re going to have to make your utility function species-sensitive. You’re going to have to claim that humans deserve special moral consideration, and not because of any characteristics of their minds. Simply because they’re human.

It’s easy to go wild with hypotheticals here. What about an alien race that was (for some unimaginable reason) just like us? What about humanoid robots with minds indistinguishable from ours?

To me it’s quite obvious that species-membership, by itself, shouldn’t be morally relevant. But it’s plain that this idea is unintuitive, and I don’t think it’s a huge mystery why.

We have an emotional knee-jerk reaction to consider harm done to beings similar to ourselves as much worse than harm done to beings different from us. That’s why the idea that a pig’s pain might matter just as much as a human’s makes you twitch. But you mustn’t let that twitch be the deciding factor.

Well, that’s not precisely correct: again, there’s no moral realism. There’s nothing in observable reality that says one utility function is better than another. So you could just throw in a weighting factor for non-human animals, satisfy your emotional knee-jerk reaction, and be done with it. However, that same similarity metric once made people twitch at the idea that the pain of a person with a different skin pigmentation mattered as much as their own.

If you listen to that twitch, that instinct that those similar to you matter more, you’re following an ethical algorithm that would have led you to the wrong answer on most of the major ethical questions through history. Or at least, the ones we’ve since changed our minds about.

If I’m happy to arbitrarily weight non-human animals lower, just because I don’t like the implications of considering their interests equal, I would have been free to do the same when considering how much the experiences of out-group persons should matter. When deciding my values, I want to be using an algorithm that would’ve gotten the right answer on slavery, even given 19th century inputs.

Now, having said that the experiences of merely sentient minds matter, I should reiterate that there are lots of kinds of joys and sufferings that aren’t relevant to them. Because a rabbit doesn’t understand that it persists through time, it’s not wrong to kill it suddenly and painlessly, out of sight/smell/earshot of other rabbits. By contrast, there are no circumstances in which killing a person doesn’t involve serious negative utility. Persons have plans and aspirations. When I consider what would be bad about being murdered, the momentary fear and pain barely rank. Similarly, I think it’s possible to please a person more deeply than a merely sentient mind. But when it comes to a simple stimulus like pain, which both kinds of mind feel similarly, it’s just as bad for both of them.

When I changed my mind about this, I hadn’t yet decided to particularly care about how ethical I was. This kept me from having to say “well, I’m not allowed to believe this, because then I’d have to be vegetarian, and hell no!”. I later did decide to be more ethical, but doing it in two stages like that seemed to make changing my mind less traumatic.


[1] I haven’t really studied the evidence about infant cognition. It’s possible that infants are fully self-aware (as in, they understand that they are a mind plus a body that persists through time), but it seems unlikely to me.

[2] Actually, I seldom see it stated probabilistically like this, which I assume is just an oversight. If you push a button that will save a life with probability 0.99 but cost a life with probability 0.01, surely the act doesn’t become unethical after the fact just because you got unlucky.
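Treating the two lives as equally valuable, pushing the button is worth 0.99 × (+1 life) + 0.01 × (−1 life) = +0.98 lives in expectation, so the choice is judged ethical at the moment it’s made, whichever outcome you actually end up with.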