Preliminary thoughts on moral weight

This post adapts some internal notes I wrote for the Open Philanthropy Project, but they are merely at a “brainstorming” stage, and do not express my “endorsed” views nor the views of the Open Philanthropy Project. This post is also written quickly and not polished or well-explained.

My 2017 Report on Consciousness and Moral Patienthood tried to address the question of “Which creatures are moral patients?” but it did little to address the question of “moral weight,” i.e. how to weigh the interests of different kinds of moral patients against each other:

For example: suppose we conclude that fishes, pigs, and humans are all moral patients, and we estimate that, for a fixed amount of money, we can (in expectation) dramatically improve the welfare of (a) 10,000 rainbow trout, (b) 1,000 pigs, or (c) 100 adult humans. In that situation, how should we compare the different options? This depends (among other things) on how much “moral weight” we give to the well-being of different kinds of moral patients.

Thus far, philosophers have said very little about moral weight (see below). In this post I lay out one approach to thinking about the question, in the hope that others might build on it or show it to be misguided.

Proposed setup

To keep this first-pass analysis of moral weight simple, let’s assume a variation on classical utilitarianism according to which the only thing that morally matters is the moment-by-moment character of a being’s conscious experience. So e.g. it doesn’t matter whether a being’s rights are respected/violated or its preferences are realized/thwarted, except insofar as those factors affect the moment-by-moment character of the being’s conscious experience, by causing pain/pleasure, happiness/sadness, etc.

Next, and again for simplicity’s sake, let’s talk only about the “typical” conscious experience of “typical” members of different species when undergoing various “canonical” positive and negative experiences, e.g. consuming species-appropriate food or having a nociceptor-dense section of skin damaged.

Given those assumptions, when we talk about the relative “moral weight” of different species, we mean to ask something like “How morally important is 10 seconds of a typical human’s experience of [some injury], compared to 10 seconds of a typical rainbow trout’s experience of [that same injury]?”

For this exercise, I’ll separate “moral weight” from “probability of moral patienthood.” Naively, you could then multiply your best estimate of a species’ moral weight (using humans as the baseline of 1) by P(moral patienthood) to get the species’ “expected moral weight” (or whatever you want to call it). Then, to estimate an intervention’s potential benefit for a given species, you could multiply [expected moral weight of species] × [individuals of species affected] × [average # of minutes of conscious experience affected across those individuals] × [average magnitude of positive impact on those minutes of conscious experience].
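
To make that arithmetic concrete, here is a minimal sketch of the naive calculation just described. Everything in it (the function name, the species, and all of the numbers) is a hypothetical placeholder rather than an estimate I endorse:

```python
# Naive "expected benefit" of an intervention for one species.
# All inputs below are hypothetical placeholders, not endorsed estimates.

def naive_expected_benefit(moral_weight, p_patienthood, n_individuals,
                           minutes_affected_per_individual, avg_magnitude):
    """Multiply through the factors described above.

    moral_weight: species' moral weight, with humans as the baseline of 1
    p_patienthood: probability that members of the species are moral patients
    n_individuals: number of individuals of the species affected
    minutes_affected_per_individual: avg. minutes of conscious experience affected
    avg_magnitude: avg. positive impact per affected minute of experience
    """
    expected_moral_weight = moral_weight * p_patienthood
    return (expected_moral_weight * n_individuals
            * minutes_affected_per_individual * avg_magnitude)

# Hypothetical example: an intervention reaching 10,000 rainbow trout.
print(naive_expected_benefit(moral_weight=0.05, p_patienthood=0.4,
                             n_individuals=10_000,
                             minutes_affected_per_individual=60,
                             avg_magnitude=0.5))  # ~6000
```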

However, I say “naively” because this doesn’t actually work, due to two-envelope effects.
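
Here is a toy illustration of the problem, with made-up numbers. Suppose you think one chicken-minute is equally likely to be worth 0.01 human-minutes or 2 human-minutes. Which species looks more valuable in expectation then depends on which species you happen to use as the unit of value:

```python
# Toy two-envelope illustration with made-up numbers.
# Hypothesis A: one chicken-minute is worth 0.01 human-minutes.
# Hypothesis B: one chicken-minute is worth 2 human-minutes.
# Assume each hypothesis has probability 0.5.
p = 0.5

# Denominate value in human-minutes: expected worth of one chicken-minute.
chicken_in_human_units = p * 0.01 + p * 2              # = 1.005 (> 1 human-minute)

# Denominate value in chicken-minutes: expected worth of one human-minute.
human_in_chicken_units = p * (1 / 0.01) + p * (1 / 2)  # = 50.25 (> 1 chicken-minute)

print(chicken_in_human_units)  # chicken-minutes look slightly better than human-minutes
print(human_in_chicken_units)  # human-minutes look far better than chicken-minutes
```

The same two hypotheses, with the same probabilities, favor the chicken when value is denominated in human-minutes and favor the human when value is denominated in chicken-minutes, which is why naively multiplying point estimates by probabilities can’t be the whole story.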

Potential dimensions of moral weight

What features of a creature’s conscious experience might be relevant to the moral weight of its experiences? Below, I describe some possibilities that I previously mentioned in Appendix Z7 of my moral patienthood report.

Note that any of the features below could be (and in some cases, very likely are) hugely multidimensional. For simplicity, I’m going to assume a unidimensional characterization of each, e.g. what we’d get if we looked only at the first principal component in a principal component analysis of such a multidimensional phenomenon.
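
For readers who want that analogy spelled out, here is a minimal sketch (using random placeholder data, not anything from the report) of collapsing a made-up multidimensional feature onto its first principal component:

```python
# Minimal sketch: collapse a made-up multidimensional feature onto its
# first principal component. The data are random placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Pretend each row is one individual's scores on 12 sub-dimensions of some feature.
scores = rng.normal(size=(100, 12))

pca = PCA(n_components=1)
unidimensional_summary = pca.fit_transform(scores)  # shape (100, 1)
print(pca.explained_variance_ratio_)  # variance captured by the single retained axis
```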

Clock speed of consciousness

Perhaps animals vary in their “clock speed.” E.g. a hummingbird reacts to some things much faster than I ever could. If any of that is under conscious control, its “clock speed” of conscious experience seems like it should be faster than mine, meaning that, intuitively, it should have a greater number of subjective “moments of consciousness” per objective minute than I do.

In general, smaller animals probably have faster clock speeds than larger ones, for mechanical reasons:

The natural oscillation periods of most consciously controllable human body parts are greater than a tenth of a second. Because of this, the human brain has been designed with a matching reaction time of roughly a tenth of a second. As it costs more to have faster reaction times, there is little point in paying to react much faster than body parts can change position.
…the first resonant period of a bending cantilever, that is, a stick fixed at one end, is proportional to its length, at least if the stick’s thickness scales with its length. For example, sticks twice as long take twice as much time to complete each oscillation. Body size and reaction time are predictably related for animals today… (Hanson 2016, ch. 6)

My impression is that it’s a common intuition to value experience by its “subjective” duration rather than its “objective” duration, with no discount. So if a hummingbird’s clock speed is 3x as fast as mine, then all else equal, an objective minute of its conscious pleasure would be worth 3x an objective minute of my conscious pleasure.
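
A minimal sketch of that valuation rule, with a made-up clock-speed multiplier standing in for whatever the empirical facts turn out to be:

```python
# Sketch of the "no discount on subjective duration" intuition.
# The clock-speed multiplier is a made-up placeholder (humans = 1).

def subjectively_weighted_value(objective_minutes, value_per_subjective_minute,
                                clock_speed_multiplier):
    """Weight an experience by its subjective rather than objective duration."""
    subjective_minutes = objective_minutes * clock_speed_multiplier
    return subjective_minutes * value_per_subjective_minute

# If a hummingbird's clock speed were 3x a human's, one objective minute of its
# pleasure would count for 3x one objective minute of equally pleasant human
# experience, all else equal.
print(subjectively_weighted_value(1, 1, clock_speed_multiplier=3))  # 3
print(subjectively_weighted_value(1, 1, clock_speed_multiplier=1))  # 1
```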

Unities of consciousness

Philosophers and cognitive scientists debate how “unified” consciousness is, in various senses. To many people, our normal conscious experience seems fairly unified, though it sometimes feels less so, for example when one goes “in and out of consciousness” during a restless night’s sleep, or when one engages in certain kinds of meditative practices.

Daniel Dennett suggests that animal conscious experience is radically less unified than human consciousness is, and cites this as a major reason he doesn’t give most animals much moral weight.

For convenience, I’ll use Bayne (2010)’s taxonomy of types of unity. He talks about subject unity, representational unity, and phenomenal unity — each of which has a “synchronic” (momentary) and “diachronic” (across time) aspect of unity.

Subject unity

Bayne explains:

My conscious states possess a certain kind of unity insofar as they are all mine; likewise, your conscious states possess that same kind of unity insofar as they are all yours. We can describe conscious states that are had by or belong to the same subject of experience as subject unified. Within subject unity we need to distinguish the unity provided by the subject of experience across time (diachronic unity) from that provided by the subject at a time (synchronic unity).

Representational unity

Bayne explains:

Let us say that conscious states are representationally unified to the degree that their contents are integrated with each other. Representational unity comes in a variety of forms. A particularly important form of representational unity concerns the integration of the contents of consciousness around perceptual objects—what we might call ‘object unity’. Perceptual features are not normally represented by isolated states of consciousness but are bound together in the form of integrated perceptual objects. This process is known as feature-binding. Feature-binding occurs not only within modalities but also between them, for we enjoy multimodal representations of perceptual objects.

I suspect many people wouldn’t treat representational unity as all that relevant to moral weight. E.g. there are humans with low representational unity of a sort (e.g. visual agnosics); are their sensory experiences less morally relevant as a result?

Phenomenal unity

Bayne explains:

Subject unity and representational unity capture important aspects of the unity of consciousness, but they don’t get to the heart of the matter. Consider again what it’s like to hear a rumba playing on the stereo whilst seeing a bartender mix a mojito. These two experiences might be subject unified insofar as they are both yours. They might also be representationally unified, for one might hear the rumba as coming from behind the bartender. But over and above these unities is a deeper and more primitive unity: the fact that these two experiences possess a conjoint experiential character. There is something it is like to hear the rumba, there is something it is like to see the bartender work, and there is something it is like to hear the rumba while seeing the bartender work. Any description of one’s overall state of consciousness that omitted the fact that these experiences are had together as components, parts, or elements of a single conscious state would be incomplete. Let us call this kind of unity — sometimes dubbed ‘co-consciousness’ — phenomenal unity.
Phenomenal unity is often in the background in discussions of the ‘stream’ or ‘field’ of consciousness. The stream metaphor is perhaps most naturally associated with the flow of consciousness — its unity through time — whereas the field metaphor more accurately captures the structure of consciousness at a time. We can say that what it is for a pair of experiences to occur within a single phenomenal field just is for them to enjoy a conjoint phenomenality — for there to be something it is like for the subject in question not only to have both experiences but to have them together. By contrast, simultaneous experiences that occur within distinct phenomenal fields do not share a conjoint phenomenal character.

Unity-independent intensity of valenced aspects of consciousness

A common report of those who take psychedelics is that, while “tripping,” their conscious experiences are “more intense” than they normally are. Similarly, pains of the same kind can differ in intensity: e.g. when my stomach is upset, the intensity of the pain waxes and wanes a fair bit, until it gradually fades below the threshold of being noticeable. The same goes for conscious pleasures.

It’s possible such variations in intensity are entirely accounted for by degrees of the different kinds of unity discussed above, or by some other feature(s) already plausibly relevant to moral weight, but maybe not. If there is some additional “intensity” variable for valenced aspects of conscious experience, it would seem a good candidate for affecting moral weight.

From my own experience, my guess is that I would endure ~10 seconds of the most intense pain I’ve ever experienced to avoid experiencing ~2 months of the lowest level of discomfort that I’d bother to call “discomfort.” That very low level of discomfort might suggest a lower bound on “intensity of valenced aspects of experience” that I intuitively morally care about, but “the most intense pain I’ve ever experienced” probably is not the highest intensity of valenced aspects of experience it is possible to experience — probably not even close. You could consider similar trades to get a sense for how much you intuitively value “intensity of experience,” at least in your own case.
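
For what it’s worth, if you (dubiously) assume that the trade above is roughly break-even and that disvalue is just intensity multiplied by subjective duration, it implies an intensity ratio on the order of 500,000x between the two experiences. The arithmetic, as a rough sketch:

```python
# Crude arithmetic for the trade described above, assuming (dubiously) that
# disvalue = intensity * duration and that the trade is roughly break-even.
SECONDS_PER_MONTH = 30 * 24 * 60 * 60   # ~2.6 million seconds

worst_pain_seconds = 10
mild_discomfort_seconds = 2 * SECONDS_PER_MONTH  # ~5.2 million seconds

# If 10 s of the worst pain trades evenly against ~5.2 million s of the mildest
# noticeable discomfort, the implied intensity ratio is roughly 500,000x.
implied_intensity_ratio = mild_discomfort_seconds / worst_pain_seconds
print(implied_intensity_ratio)  # 518400.0
```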

Moral weights of various species

(This section edited slightly on 2020-02-26.)

If we thought about all this more carefully and collected as much relevant empirical data as possible, what moral weights might we assign to different species?

Whereas my probabilities of moral patienthood for any animal as complex as a crab range only from 0.2 to 1, the plausible ranges of moral weight seem like they could be much larger. I don’t feel like I’d be surprised if an omniscient being told me that my extrapolated values would assign pigs more moral weight than humans, and I don’t feel like I’d be surprised if an omniscient being told me my extrapolated values would assign pigs a moral weight of 0.0001 (assuming they were moral patients at all).

To illustrate how this might work, below are some guesses at some “plausible ranges of moral weight” (80% prediction interval) for a variety of species that someone might come to, if they had intuitions like those explained below.

  • Humans: 1 (baseline)

  • Chimpanzees: 0.001–2

  • Pigs: 0.0005–3.5

  • Cows: 0.0001–5

  • Chickens: 0.00005–10

  • Rainbow trout: 0.00001–13

  • Fruit fly: 0.000001–20

(But whenever you’re tempted to multiply such numbers by something, remember two-envelope effects!)

What intuitions might lead to something like these ranges?

  • An intuition to not place much value on “complex/​higher-order” dimensions of moral weight — such as “fullness of self-awareness” or “capacity for reflecting on one’s holistic life satisfaction” — above and beyond the subjective duration and “intensity” of relatively “brute” pleasure/​pain/​happiness/​sadness that (in humans) tends to accompany reflection, self-awareness, etc.

  • An intuition to care more about subject unity and phenomenal unity than about such higher-order dimensions of moral weight.

  • An intuition to care most of all about clock speed and experience intensity (if intensity is distinct from unity).

  • Intuitions that if the animal species listed above are conscious, they:

    • have very little of the higher-order dimensions of conscious experience,

    • have faster clock speeds than humans (the smaller the faster),

    • probably have lower “intensity” of experience, but might actually have somewhat greater intensity of experience (e.g. because they aren’t distracted by linguistic thought),

    • have moderately less subject unity and phenomenal unity, especially of the diachronic sort.

Under these intuitions, the low end of the ranges above could be explained by the possibility that the intensity of conscious experience diminishes dramatically as brain complexity and flexibility decrease, while the high end could be explained by the possibility of faster clock speeds in smaller animals, the possibility of lesser unity in non-human animals (which one might value at >1x, for the same reason one might value a dually-conscious split-brain patient at ~2x), and the possibility of greater intensity of experience in simpler animals.
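
To make the structure of that reasoning explicit, here is a minimal sketch of how hypothetical multipliers for clock speed, intensity, and unity could compose into low-end and high-end weights for a single species. Every number here is a placeholder chosen only to illustrate the shape of the calculation, not a defended estimate:

```python
# Sketch: compose hypothetical factor multipliers into low/high moral-weight
# endpoints for one species (humans = 1). All numbers are made-up placeholders.

def moral_weight(clock_speed, intensity, unity):
    """Multiply the factors the intuitions above care most about."""
    return clock_speed * intensity * unity

# Low end: intensity collapses in simpler, less flexible brains, and lesser
# unity is valued at a discount.
low = moral_weight(clock_speed=2.0, intensity=0.0001, unity=0.5)

# High end: faster clock speed, somewhat greater intensity, and lesser unity
# valued at >1x (as with a dually-conscious split-brain patient at ~2x).
high = moral_weight(clock_speed=5.0, intensity=1.5, unity=1.5)

print(low, high)  # 0.0001 11.25
```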


Other writings on moral weight