Preliminary thoughts on moral weight
This post adapts some internal notes I wrote for the Open Philanthropy Project, but they are merely at a “brainstorming” stage, and do not express my “endorsed” views nor the views of the Open Philanthropy Project. This post was also written quickly, and is not polished or carefully explained.
My 2017 Report on Consciousness and Moral Patienthood tried to address the question of “Which creatures are moral patients?” but it did little to address the question of “moral weight,” i.e. how to weigh the interests of different kinds of moral patients against each other:
For example: suppose we conclude that fishes, pigs, and humans are all moral patients, and we estimate that, for a fixed amount of money, we can (in expectation) dramatically improve the welfare of (a) 10,000 rainbow trout, (b) 1,000 pigs, or (c) 100 adult humans. In that situation, how should we compare the different options? This depends (among other things) on how much “moral weight” we give to the well-being of different kinds of moral patients.
Thus far, philosophers have said very little about moral weight (see below). In this post I lay out one approach to thinking about the question, in the hope that others might build on it or show it to be misguided.
Proposed setup
For the simplicity of a first-pass analysis of moral weight, let’s assume a variation on classical utilitarianism according to which the only thing that morally matters is the moment-by-moment character of a being’s conscious experience. So e.g. it doesn’t matter whether a being’s rights are respected/violated or its preferences are realized/thwarted, except insofar as those factors affect the moment-by-moment character of the being’s conscious experience, by causing pain/pleasure, happiness/sadness, etc.
Next, and again for simplicity’s sake, let’s talk only about the “typical” conscious experience of “typical” members of different species when undergoing various “canonical” positive and negative experiences, e.g. consuming species-appropriate food or having a nociceptor-dense section of skin damaged.
Given those assumptions, when we talk about the relative “moral weight” of different species, we mean to ask something like “How morally important is 10 seconds of a typical human’s experience of [some injury], compared to 10 seconds of a typical rainbow trout’s experience of [that same injury]?”
For this exercise, I’ll separate “moral weight” from “probability of moral patienthood.” Naively, you could then multiply your best estimate of a species’ moral weight (using humans as the baseline of 1) by P(moral patienthood) to get the species’ “expected moral weight” (or whatever you want to call it). Then, to estimate an intervention’s potential benefit for a given species, you could multiply [expected moral weight of species] × [individuals of species affected] × [average # of minutes of conscious experience affected across those individuals] × [average magnitude of positive impact on those minutes of conscious experience].
However, I say “naively” because this doesn’t actually work, due to two-envelope effects: the expected exchange rate you compute between two species depends on which species you hold fixed as the unit of value, so there is no single “expected moral weight” that you can safely multiply through.
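To make the problem concrete, here is a minimal sketch (my illustration, not from the original post; the 50/50 credence split and the weights 0.01 and 2 are made-up numbers) of how the naive expected-value calculation flips depending on which species is treated as the fixed unit:

```python
# Hypothetical numbers for illustration only: suppose we are split 50/50
# between "a chicken's experience matters 0.01x a human's" and "2x a human's".
p = 0.5
w_low, w_high = 0.01, 2.0

# Expected moral weight of the chicken, holding the human fixed at 1:
e_chicken = p * w_low + p * w_high              # = 1.005 -> "chickens ~ humans"

# The same two hypotheses, holding the chicken fixed at 1 instead
# (so the human's weight is 1/0.01 = 100 or 1/2 = 0.5):
e_human_in_chicken_units = p * (1 / w_low) + p * (1 / w_high)   # = 50.25
implied_chicken_weight = 1 / e_human_in_chicken_units           # ~ 0.02

print(e_chicken, implied_chicken_weight)
# The two normalizations give answers that differ by a factor of ~50, so
# plugging a single "expected moral weight" into the benefit formula above
# is not a well-defined operation. This is the two-envelope effect.
```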
Potential dimensions of moral weight
What features of a creature’s conscious experience might be relevant to the moral weight of its experiences? Below, I describe some possibilities that I previously mentioned in Appendix Z7 of my moral patienthood report.
Note that any of the features below could be (and in some cases, very likely are) hugely multidimensional. For simplicity, I’m going to assume a unidimensional characterization of them, e.g. what we’d get if we looked only at the first principal component in a principal component analysis of a hugely multidimensional phenomenon.
Clock speed of consciousness
Perhaps animals vary in their “clock speed.” E.g. a hummingbird reacts to some things much faster than I ever could. If any of that is under conscious control, its “clock speed” of conscious experience seems like it should be faster than mine, meaning that, intuitively, it should have a greater number of subjective “moments of consciousness” per objective minute than I do.
In general, smaller animals probably have faster clock speeds than larger ones, for mechanical reasons:
The natural oscillation periods of most consciously controllable human body parts are greater than a tenth of a second. Because of this, the human brain has been designed with a matching reaction time of roughly a tenth of a second. As it costs more to have faster reaction times, there is little point in paying to react much faster than body parts can change position.
…the first resonant period of a bending cantilever, that is, a stick fixed at one end, is proportional to its length, at least if the stick’s thickness scales with its length. For example, sticks twice as long take twice as much time to complete each oscillation. Body size and reaction time are predictably related for animals today… (Hanson 2016, ch. 6)
My impression is that it’s a common intuition to value experience by its “subjective” duration rather than its “objective” duration, with no discount. So if a hummingbird’s clock speed is 3x as fast as mine, then all else equal, an objective minute of its conscious pleasure would be worth 3x an objective minute of my conscious pleasure.
Unities of consciousness
Philosophers and cognitive scientists debate how “unified” consciousness is, in various ways. To many people, our normal conscious experience seems pretty unified, though sometimes it feels less so, for example when one goes “in and out of consciousness” during a restless night’s sleep, or when one engages in certain kinds of meditative practices.
Daniel Dennett suggests that animal conscious experience is radically less unified than human consciousness is, and cites this as a major reason he doesn’t give most animals much moral weight.
For convenience, I’ll use Bayne (2010)’s taxonomy of types of unity. He talks about subject unity, representational unity, and phenomenal unity — each of which has a “synchronic” (momentary) and “diachronic” (across time) aspect of unity.
Subject unity
Bayne explains:
My conscious states possess a certain kind of unity insofar as they are all mine; likewise, your conscious states possess that same kind of unity insofar as they are all yours. We can describe conscious states that are had by or belong to the same subject of experience as subject unified. Within subject unity we need to distinguish the unity provided by the subject of experience across time (diachronic unity) from that provided by the subject at a time (synchronic unity).
Representational unity
Bayne explains:
Let us say that conscious states are representationally unified to the degree that their contents are integrated with each other. Representational unity comes in a variety of forms. A particularly important form of representational unity concerns the integration of the contents of consciousness around perceptual objects—what we might call ‘object unity’. Perceptual features are not normally represented by isolated states of consciousness but are bound together in the form of integrated perceptual objects. This process is known as feature-binding. Feature-binding occurs not only within modalities but also between them, for we enjoy multimodal representations of perceptual objects.
I suspect many people wouldn’t treat representational unity as all that relevant to moral weight. E.g. there are humans with low representational unity of a sort (e.g. visual agnosics); are their sensory experiences less morally relevant as a result?
Phenomenal unity
Bayne explains:
Subject unity and representational unity capture important aspects of the unity of consciousness, but they don’t get to the heart of the matter. Consider again what it’s like to hear a rumba playing on the stereo whilst seeing a bartender mix a mojito. These two experiences might be subject unified insofar as they are both yours. They might also be representationally unified, for one might hear the rumba as coming from behind the bartender. But over and above these unities is a deeper and more primitive unity: the fact that these two experiences possess a conjoint experiential character. There is something it is like to hear the rumba, there is something it is like to see the bartender work, and there is something it is like to hear the rumba while seeing the bartender work. Any description of one’s overall state of consciousness that omitted the fact that these experiences are had together as components, parts, or elements of a single conscious state would be incomplete. Let us call this kind of unity — sometimes dubbed ‘co-consciousness’ — phenomenal unity.
Phenomenal unity is often in the background in discussions of the ‘stream’ or ‘field’ of consciousness. The stream metaphor is perhaps most naturally associated with the flow of consciousness — its unity through time — whereas the field metaphor more accurately captures the structure of consciousness at a time. We can say that what it is for a pair of experiences to occur within a single phenomenal field just is for them to enjoy a conjoint phenomenality — for there to be something it is like for the subject in question not only to have both experiences but to have them together. By contrast, simultaneous experiences that occur within distinct phenomenal fields do not share a conjoint phenomenal character.
Unity-independent intensity of valenced aspects of consciousness
A common report of those who take psychedelics is that, while “tripping,” their conscious experiences are “more intense” than they normally are. Similarly, different pains can feel qualitatively similar while differing in intensity: when my stomach is upset, for example, the intensity of the pain waxes and wanes a fair bit, until it gradually fades to the point of not being noticeable anymore. The same goes for conscious pleasures.
It’s possible such variations in intensity are entirely accounted for by their degrees of different kinds of unity, or by some other plausible feature(s) of moral weight, but maybe not. If there is some additional “intensity” variable for valenced aspects of conscious experience, it would seem a good candidate for affecting moral weight.
From my own experience, my guess is that I would endure ~10 seconds of the most intense pain I’ve ever experienced to avoid experiencing ~2 months of the lowest level of discomfort that I’d bother to call “discomfort.” That very low level of discomfort might suggest a lower bound on “intensity of valenced aspects of experience” that I intuitively morally care about, but “the most intense pain I’ve ever experienced” probably is not the highest intensity of valenced aspects of experience it is possible to experience — probably not even close. You could consider similar trades to get a sense for how much you intuitively value “intensity of experience,” at least in your own case.
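As a rough back-of-the-envelope check (my arithmetic, assuming a crude linear “intensity × duration” model of disvalue that the post does not commit to), that trade implies a very large intensity ratio between the two experiences:

```python
# Crude linear model: disvalue = intensity * duration. At the point of
# indifference, intensity_worst * 10 s == intensity_mild * ~2 months.
seconds_of_worst_pain = 10
seconds_of_mild_discomfort = 2 * 30 * 24 * 3600   # ~2 months ~= 5,184,000 s

implied_intensity_ratio = seconds_of_mild_discomfort / seconds_of_worst_pain
print(implied_intensity_ratio)   # ~518,000x

# Under this toy model, "the most intense pain I've ever experienced" would be
# roughly half a million times as intense as barely-noticeable discomfort,
# and the true ceiling on intensity may be far higher still.
```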
Moral weights of various species
(This section edited slightly on 2020-02-26.)
If we thought about all this more carefully and collected as much relevant empirical data as possible, what moral weights might we assign to different species?
Whereas my probabilities of moral patienthood for any animal as complex as a crab only range from 0.2 to 1, the plausible ranges of moral weight seem like they could be much larger. I don’t think I’d be surprised if an omniscient being told me that my extrapolated values would assign pigs more moral weight than humans, nor would I be surprised if it told me that my extrapolated values would assign pigs a moral weight of 0.0001 (assuming they were moral patients at all).
To illustrate how this might work, below are some guesses at “plausible ranges of moral weight” (80% prediction intervals) for a variety of species, of the sort someone might arrive at if they had intuitions like those explained below.
Humans: 1 (baseline)
Chimpanzees: 0.001 − 2
Pigs: 0.0005 − 3.5
Cows: 0.0001 − 5
Chickens: 0.00005 − 10
Rainbow trout: 0.00001 − 13
Fruit fly: 0.000001 − 20
(But whenever you’re tempted to multiply such numbers by something, remember two-envelope effects!)
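If one wanted to work with such ranges quantitatively, one option (my sketch, not a method endorsed in the post) is to treat each 80% interval as the 10th–90th percentile range of a lognormal distribution and reason with samples rather than point estimates, keeping the two-envelope caveat above in mind:

```python
# Sketch: encode each 80% prediction interval as a lognormal whose
# 10th/90th percentiles match the stated range, then draw samples.
import math
import random

Z_90 = 1.2816  # standard-normal 90th percentile

ranges = {                 # the post's illustrative ranges, human = 1
    "chimpanzee":    (0.001, 2),
    "pig":           (0.0005, 3.5),
    "cow":           (0.0001, 5),
    "chicken":       (0.00005, 10),
    "rainbow trout": (0.00001, 13),
    "fruit fly":     (0.000001, 20),
}

def sample_weight(lo, hi):
    """Draw one moral-weight sample from a lognormal whose 10th and 90th
    percentiles equal the endpoints of the given 80% interval."""
    mu = (math.log(lo) + math.log(hi)) / 2
    sigma = (math.log(hi) - math.log(lo)) / (2 * Z_90)
    return math.exp(random.gauss(mu, sigma))

print(sample_weight(*ranges["chicken"]))
# Note: naively averaging such samples (or their reciprocals) re-introduces
# the two-envelope problem flagged above.
```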
What intuitions might lead to something like these ranges?
An intuition to not place much value on “complex/higher-order” dimensions of moral weight — such as “fullness of self-awareness” or “capacity for reflecting on one’s holistic life satisfaction” — above and beyond the subjective duration and “intensity” of relatively “brute” pleasure/pain/happiness/sadness that (in humans) tends to accompany reflection, self-awareness, etc.
An intuition to care more about subject unity and phenomenal unity than about such higher-order dimensions of moral weight.
An intuition to care most of all about clock speed and experience intensity (if intensity is distinct from unity).
Intuitions that if the animal species listed above are conscious, they:
have very little of the higher-order dimensions of conscious experience,
have faster clock speeds than humans (the smaller the faster),
probably have lower “intensity” of experience, but might actually have somewhat greater intensity of experience (e.g. because they aren’t distracted by linguistic thought),
have moderately less subject unity and phenomenal unity, especially of the diachronic sort.
Under these intuitions, the low end of the ranges above could be explained by the possibility that intensity of conscious experience diminishes dramatically with brain complexity and flexibility. The high end could be explained by the possibility of faster clock speeds in smaller animals, the possibility of lesser unity in non-human animals (which one might value at >1x, for the same reason one might value a dually-conscious split-brain patient at ~2x), and the possibility of greater intensity of experience in simpler animals.
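To make the shape of that reasoning explicit, here is a purely illustrative multiplicative decomposition (the factor values are invented for illustration and are not the author’s estimates):

```python
# Hypothetical factors for a small animal such as a chicken, all relative
# to a typical human (human = 1 on every factor). Invented numbers.
clock_speed_factor = 4.0    # more subjective moments per objective second
unity_factor       = 1.25   # lesser unity, valued at >1x (cf. the split-brain intuition)
intensity_factor   = 2.0    # possibly more intense "brute" experience

high_end_weight = clock_speed_factor * unity_factor * intensity_factor
print(high_end_weight)      # 10.0, in the ballpark of the chicken's high end above

# The low end comes from a different possibility: intensity collapsing to a
# tiny fraction of a human's (e.g. an intensity factor near 1e-5), which
# swamps whatever is gained from clock speed or lesser unity.
```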
Other writings on moral weight
Brian Tomasik: Is animal suffering less bad than human suffering?; Which computations do I care about?; Is brain size morally relevant?; Do Smaller Animals Have Faster Subjective Experiences?; Two-Envelopes Problem for Uncertainty about Brain-Size Valuation and Other Moral Questions
Nick Bostrom: Quantity of Experience
Kevin Wong: Counting Animals
Oscar Horta: Questions of Priority and Interspecies Comparisons of Happiness
Adler et al.: Would you choose to be happy? Tradeoffs between happiness and the other dimensions of life in a large population survey
Comments
It’s fairly rare (lately) that I’ve read something that meaningfully shifted my distribution of “what sorts of moral and/or consciousness theories I’m likely to subscribe to after more learning/reflection.”
I think this probably mostly has to do with me being in a valley where there are a lot of “relatively easy” concepts I’ve already learned, and then [potentially] harder concepts that I’d have to put a lot of work into understanding. (I did kinda bounce off Luke’s longer post on consciousness, although I think that had more to do with length than it being over my head.)
But this post seemed well targeted towards 2018_raemon’s background. I had thought about high clock speed being relevant to the moral relevance of digital minds, but somehow hadn’t considered that this might also make hummingbirds more morally relevant than humans.
(To be clear, all of this is hedged with massive uncertainty, and I currently don’t expect to end up believing hummingbirds are more relevant. But it felt like a big shift in how I carved up the space of possibilities.)
Seconding Ray. This was a bunch of important hypotheses about consciousness I had never heard of.