Preliminary thoughts on moral weight

This post adapts some internal notes I wrote for the Open Philanthropy Project, but they are merely at a "brainstorming" stage, and do not express my "endorsed" views nor the views of the Open Philanthropy Project. This post was also written quickly and is not polished or well explained.

My 2017 Report on Consciousness and Moral Patienthood tried to address the question of "Which creatures are moral patients?" but it did little to address the question of "moral weight," i.e. how to weigh the interests of different kinds of moral patients against each other:

For example: suppose we conclude that fishes, pigs, and humans are all moral patients, and we estimate that, for a fixed amount of money, we can (in expectation) dramatically improve the welfare of (a) 10,000 rainbow trout, (b) 1,000 pigs, or (c) 100 adult humans. In that situation, how should we compare the different options? This depends (among other things) on how much "moral weight" we give to the well-being of different kinds of moral patients.

Thus far, philosophers have said very little about moral weight (see below). In this post I lay out one approach to thinking about the question, in the hope that others might build on it or show it to be misguided.

Proposed setup

For the sake of a simple first-pass analysis of moral weight, let's assume a variation on classical utilitarianism according to which the only thing that morally matters is the moment-by-moment character of a being's conscious experience. So e.g. it doesn't matter whether a being's rights are respected/violated or its preferences are realized/thwarted, except insofar as those factors affect the moment-by-moment character of the being's conscious experience, by causing pain/pleasure, happiness/sadness, etc.

Next, and again for simplicity's sake, let's talk only about the "typical" conscious experience of "typical" members of different species when undergoing various "canonical" positive and negative experiences, e.g. consuming species-appropriate food or having a nociceptor-dense section of skin damaged.

Given those assumptions, when we talk about the relative "moral weight" of different species, we mean to ask something like: "How morally important is 10 seconds of a typical human's experience of [some injury], compared to 10 seconds of a typical rainbow trout's experience of [that same injury]?"

For this exercise, I'll separate "moral weight" from "probability of moral patienthood." Naively, you could then multiply your best estimate of a species' moral weight (using humans as the baseline of 1) by P(moral patienthood) to get the species' "expected moral weight" (or whatever you want to call it). Then, to estimate an intervention's potential benefit for a given species, you could multiply [expected moral weight of species] × [individuals of species affected] × [average # of minutes of conscious experience affected across those individuals] × [average magnitude of positive impact on those minutes of conscious experience].
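Concretely, the naive calculation could look like the following sketch. The function and the placeholder numbers are mine, purely for illustration; they are not estimates I endorse.

```python
# A minimal sketch of the naive "expected moral weight" calculation described
# above. All numbers are made-up placeholders, for illustration only.

def expected_benefit(moral_weight, p_patienthood, individuals,
                     minutes_affected, impact_per_minute):
    """Naive expected benefit of an intervention for one species.

    moral_weight      -- best-guess moral weight, with humans = 1
    p_patienthood     -- probability the species has moral patienthood
    individuals       -- number of individuals affected
    minutes_affected  -- average minutes of conscious experience affected
    impact_per_minute -- average magnitude of positive impact per minute
    """
    expected_moral_weight = moral_weight * p_patienthood
    return expected_moral_weight * individuals * minutes_affected * impact_per_minute

# Illustrative comparison (placeholder inputs, not real estimates):
trout = expected_benefit(moral_weight=0.05, p_patienthood=0.6,
                         individuals=10_000, minutes_affected=60, impact_per_minute=1.0)
humans = expected_benefit(moral_weight=1.0, p_patienthood=1.0,
                          individuals=100, minutes_affected=60, impact_per_minute=1.0)
print(trout, humans)
```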

However, I say "naively" because this doesn't actually work, due to two-envelope effects.
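To see why, here is a toy numerical illustration of the two-envelope problem as it shows up here. The 50/50 credences and the 0.01 vs. 2 figures are made up for illustration; the point is that the same beliefs about relative moral weight can support different conclusions depending on which species you fix as the unit.

```python
# Toy illustration of the two-envelope problem (made-up numbers).
# Suppose you split your credence 50/50 between two hypotheses about how a
# trout's experience compares to a human's.

p = 0.5
trout_in_human_units = [0.01, 2.0]   # hypothesis A: 0.01, hypothesis B: 2.0

# Using humans as the unit, the trout's expected weight looks substantial:
e_trout = p * trout_in_human_units[0] + p * trout_in_human_units[1]
print(e_trout)  # 1.005 -> a trout looks roughly as important as a human

# Using trout as the unit, the *same* credences make humans look far weightier:
human_in_trout_units = [1 / w for w in trout_in_human_units]  # [100.0, 0.5]
e_human = p * human_in_trout_units[0] + p * human_in_trout_units[1]
print(e_human)  # 50.25 -> a human looks ~50x as important as a trout

# Both conclusions follow from the same beliefs, so naively taking expectations
# over moral weights depends on an arbitrary choice of unit.
```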

Potential dimensions of moral weight

What features of a creature's conscious experience might be relevant to the moral weight of its experiences? Below, I describe some possibilities that I previously mentioned in Appendix Z7 of my moral patienthood report.

Note that any of the features below could be (and in some cases, very likely are) hugely multidimensional. For simplicity, I'm going to assume a unidimensional characterization of them, e.g. what we'd get if we looked only at the first principal component in a principal component analysis of a hugely multidimensional phenomenon.

Clock speed of consciousness

Perhaps animals vary in their "clock speed." E.g. a hummingbird reacts to some things much faster than I ever could. If any of that is under conscious control, its "clock speed" of conscious experience seems like it should be faster than mine, meaning that, intuitively, it should have a greater number of subjective "moments of consciousness" per objective minute than I do.

In general, smaller animals probably have faster clock speeds than larger ones, for mechanical reasons:

The natural oscillation periods of most consciously controllable human body parts are greater than a tenth of a second. Because of this, the human brain has been designed with a matching reaction time of roughly a tenth of a second. As it costs more to have faster reaction times, there is little point in paying to react much faster than body parts can change position.
…the first resonant period of a bending cantilever, that is, a stick fixed at one end, is proportional to its length, at least if the stick's thickness scales with its length. For example, sticks twice as long take twice as much time to complete each oscillation. Body size and reaction time are predictably related for animals today… (Hanson 2016, ch. 6)

My impression is that it's a common intuition to value experience by its "subjective" duration rather than its "objective" duration, with no discount. So if a hummingbird's clock speed is 3x as fast as mine, then all else equal, an objective minute of its conscious pleasure would be worth 3x an objective minute of my conscious pleasure.
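As a minimal sketch of that intuition, using the hypothetical 3x hummingbird figure from above:

```python
# Sketch of weighting experience by subjective rather than objective duration,
# with no discount, using the hypothetical 3x hummingbird clock speed above.

def subjective_minutes(objective_minutes, clock_speed_multiplier):
    """Subjective duration of an experience.

    clock_speed_multiplier -- subjective moments per objective moment,
                              relative to a typical human (human = 1.0)
    """
    return objective_minutes * clock_speed_multiplier

human_value = subjective_minutes(1.0, clock_speed_multiplier=1.0)
hummingbird_value = subjective_minutes(1.0, clock_speed_multiplier=3.0)
print(hummingbird_value / human_value)  # 3.0 -> 3x weight per objective minute, all else equal
```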

Unities of consciousness

Philosophers and cognitive scientists debate how "unified" consciousness is. Our normal conscious experience seems to many people to be pretty unified in various ways, though sometimes it feels less unified, for example when one drifts "in and out of consciousness" during a restless night's sleep, or when one engages in certain kinds of meditative practices.

Daniel Dennett suggests that animal conscious experience is radically less unified than human consciousness is, and cites this as a major reason he doesn't give most animals much moral weight.

For convenience, I'll use Bayne (2010)'s taxonomy of types of unity. He talks about subject unity, representational unity, and phenomenal unity, each of which has a "synchronic" (momentary) and a "diachronic" (across time) aspect.

Subject unity

Bayne explains:

My conscious states possess a certain kind of unity insofar as they are all mine; likewise, your conscious states possess that same kind of unity insofar as they are all yours. We can describe conscious states that are had by or belong to the same subject of experience as subject unified. Within subject unity we need to distinguish the unity provided by the subject of experience across time (diachronic unity) from that provided by the subject at a time (synchronic unity).

Representational unity

Bayne explains:

Let us say that conscious states are representationally unified to the degree that their contents are integrated with each other. Representational unity comes in a variety of forms. A particularly important form of representational unity concerns the integration of the contents of consciousness around perceptual objects—what we might call ‘object unity’. Perceptual features are not normally represented by isolated states of consciousness but are bound together in the form of integrated perceptual objects. This process is known as feature-binding. Feature-binding occurs not only within modalities but also between them, for we enjoy multimodal representations of perceptual objects.

I suspect many people wouldn't treat representational unity as all that relevant to moral weight. For example, some humans have low representational unity of a sort (e.g. visual agnosics); are their sensory experiences less morally relevant as a result?

Phenomenal unity

Bayne explains:

Subject unity and representational unity capture important aspects of the unity of consciousness, but they don’t get to the heart of the matter. Consider again what it’s like to hear a rumba playing on the stereo whilst seeing a bartender mix a mojito. These two experiences might be subject unified insofar as they are both yours. They might also be representationally unified, for one might hear the rumba as coming from behind the bartender. But over and above these unities is a deeper and more primitive unity: the fact that these two experiences possess a conjoint experiential character. There is something it is like to hear the rumba, there is something it is like to see the bartender work, and there is something it is like to hear the rumba while seeing the bartender work. Any description of one’s overall state of consciousness that omitted the fact that these experiences are had together as components, parts, or elements of a single conscious state would be incomplete. Let us call this kind of unity — sometimes dubbed ‘co-consciousness’ — phenomenal unity.
Phenomenal unity is often in the background in discussions of the ‘stream’ or ‘field’ of consciousness. The stream metaphor is perhaps most naturally associated with the flow of consciousness — its unity through time — whereas the field metaphor more accurately captures the structure of consciousness at a time. We can say that what it is for a pair of experiences to occur within a single phenomenal field just is for them to enjoy a conjoint phenomenality — for there to be something it is like for the subject in question not only to have both experiences but to have them together. By contrast, simultaneous experiences that occur within distinct phenomenal fields do not share a conjoint phenomenal character.

Unity-independent intensity of valenced aspects of consciousness

A common report from those who take psychedelics is that, while "tripping," their conscious experiences are "more intense" than they normally are. Similarly, pains of the same kind can vary in intensity: when my stomach is upset, the intensity of my stomach pain waxes and wanes a fair bit, until it gradually fades to not being noticeable anymore. The same goes for conscious pleasures.

It’s pos­si­ble such vari­a­tions in in­ten­sity are en­tirely ac­counted for by their de­grees of differ­ent kinds of unity, or by some other plau­si­ble fea­ture(s) of moral weight, but maybe not. If there is some ad­di­tional “in­ten­sity” vari­able for valenced as­pects of con­scious ex­pe­rience, it would seem a good can­di­date for af­fect­ing moral weight.

From my own experience, my guess is that I would endure ~10 seconds of the most intense pain I've ever experienced to avoid experiencing ~2 months of the lowest level of discomfort that I'd bother to call "discomfort." That very low level of discomfort might suggest a lower bound on the "intensity of valenced aspects of experience" that I intuitively morally care about, but "the most intense pain I've ever experienced" is probably not the highest intensity of valenced experience it is possible to have (probably not even close). You could consider similar trades to get a sense for how much you intuitively value "intensity of experience," at least in your own case.
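For concreteness, here is the rough arithmetic implied by that trade, under the strong assumption that (dis)value scales linearly with duration times intensity:

```python
# Rough arithmetic implied by the trade described above, assuming (strongly)
# that (dis)value scales linearly as duration x intensity.

seconds_of_worst_pain = 10
seconds_of_mild_discomfort = 2 * 30 * 24 * 3600   # ~2 months, approximated as 60 days

# If 10 s at the high intensity is judged equal in (dis)value to ~2 months at
# the low intensity, the implied intensity ratio between the two is:
implied_ratio = seconds_of_mild_discomfort / seconds_of_worst_pain
print(implied_ratio)  # 518400.0 -> roughly a 500,000 : 1 intensity ratio
```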

Moral weights of various species

If we thought about all this more carefully and collected as much relevant empirical data as possible, what moral weights might we assign to different species?

Whereas my probabilities of moral patienthood for any animal as complex as a crab range only from 0.2 to 1, the plausible ranges of moral weight seem like they could be much larger. I don't feel like I'd be surprised if an omniscient being told me that my extrapolated values would assign pigs more moral weight than humans, and I don't feel like I'd be surprised if an omniscient being told me my extrapolated values would assign pigs a moral weight of 0.0001 (assuming they were moral patients at all).

To illustrate how this might work, below are some guesses at "plausible ranges of moral weight" for a variety of species that someone might come to, if they had intuitions like those explained below.

  • Humans: 1 (baseline)

  • Chimpanzees: 0.001–3

  • Pigs: 0.0005–3.5

  • Cows: 0.0001–5

  • Chickens: 0.0005–7

  • Rainbow trout: 0.00005–10

  • Fruit flies: 0.000001–20

(But whenever you're tempted to multiply such numbers by something, remember two-envelope effects!)

What intuitions might lead to something like these ranges?

  • An intuition to not place much value on "complex/higher-order" dimensions of moral weight (such as "fullness of self-awareness" or "capacity for reflecting on one's holistic life satisfaction") above and beyond the subjective duration and "intensity" of relatively "brute" pleasure/pain/happiness/sadness that (in humans) tends to accompany reflection, self-awareness, etc.

  • An intuition to care more about subject unity and phenomenal unity than about such higher-order dimensions of moral weight.

  • An intuition to care most of all about clock speed and experience intensity (if intensity is distinct from unity).

  • Intuitions that if the animal species listed above are conscious, they:

    • have very little of the higher-order dimensions of conscious experience,

    • have faster clock speeds than humans (the smaller the faster),

    • probably have lower "intensity" of experience, but might actually have somewhat greater intensity of experience (e.g. because they aren't distracted by linguistic thought),

    • have moderately less subject unity and phenomenal unity, especially of the diachronic sort.

Under these intuitions, the low end of the ranges above could be explained by the possibility that intensity of conscious experience diminishes dramatically as brain complexity and flexibility decrease, while the high end could be explained by the possibility of faster clock speeds for smaller animals, the possibility of lesser unity in non-human animals (which one might value at >1x, for the same reason one might value a dually-conscious split-brain patient at ~2x), and the possibility of greater intensity of experience in simpler animals.
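One way such wide ranges could arise is from multiplying together a few uncertain per-factor multipliers. The factor ranges in the sketch below are hypothetical placeholders of my own, not estimates from this post; the point is only that a range spanning several orders of magnitude falls out quite easily.

```python
# Hypothetical sketch of how wide overall ranges can fall out of a few
# uncertain per-factor multipliers (all factor ranges below are made up).
# Overall weight = clock_speed x unity x intensity, each relative to humans.

factors_for_small_animal = {
    "clock_speed": (1.0, 5.0),     # smaller animals plausibly faster (per the Hanson scaling point)
    "unity":       (0.5, 2.0),     # less unity might be valued at <1x or >1x (cf. the split-brain case)
    "intensity":   (0.001, 1.5),   # could be drastically lower, or somewhat higher
}

low = 1.0
high = 1.0
for lo, hi in factors_for_small_animal.values():
    low *= lo
    high *= hi

print(low, high)  # 0.0005 15.0 -> a range spanning several orders of magnitude
```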

Other writings on moral weight