Every now and then in discussions of animal welfare, I see the idea that the “amount” of their subjective experience should be weighted by something like their total number of neurons. Is there a writeup somewhere of the reasoning behind that intuition? Because it doesn’t seem intuitive to me at all.
From something like a functionalist perspective, where pleasure and pain exist because they have particular functions in the brain, I would not expect pleasure and pain to become more intense merely because the brain happens to have more neurons. Rather, I would expect that having more neurons may 1) give the capability to experience anything like pleasure and pain at all, and 2) make a broader scale of pleasure and pain possible, if that happens to be useful for evolutionary purposes.
For a comparison, consider the sharpness of our senses. Humans have pretty big brains (though our brains are not the biggest), but that doesn’t mean that all of our senses are better than those of all the animals with smaller brains. Eagles have sharper vision, bats have better hearing, dogs have better smell, etc.
Humans would rank quite well if you took the average of all of our senses—we’re elite generalists while lots of the animals that beat us on a particular sense are specialized to that sense in particular—but still, it’s not straightforwardly the case that bigger brain = sharper experience. Eagles have sharper vision because they are specialized into a particular niche that makes use of that sharper vision.
On a similar basis, I would expect that even if a bigger brain makes a broader scale of pain/pleasure possible in principle, evolution will only make use of that potential if there is a functional need for it. (Just as it invests neural capacity in a particular sense if the organism is in a niche where that’s useful.) And I would expect a relatively limited scale to already be sufficient for most purposes. It doesn’t seem to take that much pain before something becomes a clear DO NOT WANT (whether for a human or an animal), and past that the only clear benefit for a wider scale is if you regularly need to have multiple sources of strong pain so that the organism has to choose the lesser pain.
What I think is the case is that more intelligent animals—especially more social animals—have more distinct sources of pleasure and pain (we can feel a broad range of social emotions, both good and bad, that solitary animals lack). And possibly extra neural capacity would be useful for that broader spectrum of types. But I would think that the broader spectrum of potential sources for pleasure and pain would still not require a greater spectrum of intensity.
Of course, the human scale for pleasure and pain seems to be much wider than you’d intuitively think necessary, so it’s probably not that our spectrum of intensity has been selected to be exactly the necessary one. But most people’s day-to-day experience does not make use of such a broad scale. In fact, most people are outright incapable of even imagining what the extreme ends of the scale are like. That would seem to suggest to me that the existence of the extremes is more of an evolutionary spandrel than anything truly necessary for guiding daily behavior, so that the “typical useful human day-to-day range” and the “typical useful animal day-to-day range” would be similar. And I don’t see why the typical useful range would require a particularly high neuron count, past the point where you can have it at all.
(In the above, I’ve for simplicity assumed that pain and suffering are the same. I don’t actually believe that they are the same, but I’m very unsure of which animals I expect to be capable of suffering on top of just feeling pain/pleasure. In any case, you could apply basically all the same reasoning to the question of suffering.)
To me, the core of the neuron-counting intuition is that all living beings seem to have a depth to their reactions that scales with the size of their mind. There’s a richness to a human mind in its reactions to the world which other animals don’t have, just as dogs have a deeper interaction with everything than insects do. This correlates pretty strongly with when and why we emotionally care about creatures: how much we ‘recognize’ their depth. It is also why people are often most interested when they learn that certain animals have more depth than we might intuitively think.
As for whether there is an article, I don’t know of any that I like, but I’ll lay out some thoughts. This will be somewhat rambly, partly to try to give some stronger reasons, and partly to gesture at related ideas that aren’t spelled out enough.
One important consideration I often have to keep in mind in these sorts of discussions is that when we evaluate moral worth, we do not just care about instantaneous pleasure/pain, but rather about an intricate weighting of hundreds of different considerations. This very well may mean that we care about weighting by richness of mind, even if we determine that a scale would say that two beings experience the ~same level of pain.
Duplication: If we aren’t weighting by ‘richness of mind’ or some related factor, then we still end up with a similar weighting factor once we stop treating the mind as one solid thing with a core singular self receiving input. If a simple system can have pain just as intense as a more complex system, then why wouldn’t the subsystems within a large brain have their own intense ‘experiences’? I experience a twinge of discomfort when thinking of an embarrassing event some years ago. To my ‘self’ this is a relatively minor pain, but my brain is using substantially more neurons for it than a beetle has in total. More subsystems fire. While the core mind handles this as a minor sensation, a small subsystem of the mind may be receiving a big update locally; it is just that the architect overseeing everything else doesn’t need to directly perceive the sensation as more than a small hit.
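To make the Duplication picture a bit more concrete, here is a toy sketch of the accounting it suggests. This is my own illustration, not the comment’s actual model; every number is invented purely for the sake of the contrast.

```python
# Toy sketch of the Duplication point (all numbers invented for illustration):
# the top-level "self" registers a minor pain, but many subcircuits each take
# their own sizable local update, and a per-self accounting ignores them.

def self_report(top_level_signal: float) -> float:
    # What the "architect" perceives: just the top-level signal.
    return top_level_signal

def subcircuit_total(n_subcircuits: int, mean_local_update: float) -> float:
    # What a per-subsystem accounting would add up instead.
    return n_subcircuits * mean_local_update

beetle_pain = self_report(0.8)                    # simple system, one strong signal
human_twinge = self_report(0.1)                   # "minor" embarrassment twinge
human_twinge_summed = subcircuit_total(500, 0.3)  # same twinge, counted per subcircuit

print(beetle_pain, human_twinge, human_twinge_summed)  # 0.8  0.1  150.0
```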
A mild form of this duplication idea drives the “cognitive complexity ⇒ pain is more impactful” intuition. Pain filters through multiple layers of your mind, updating them. For an insect, this is simple conditioning to avoid-on-sense. For a dog, it is similar but with added updates to the local context. For humans, it can have farther-reaching consequences for how much they trust others, themselves, and their safety, both locally and in general. A mouse may just get a “don’t go there” when shocked, while a human gets “don’t go there; not safe; I hate being wrong about being safe”, etc.
To address specifically the claim that pleasure and pain wouldn’t become more intense merely because the brain happens to have more neurons: the richness of mind provides an answer, namely that pain ties into far more. While my eyes and the eyes of a mouse are likely both providing a similar sense of “BLUE=true, LIGHT=true, SKY=true” when looking up at the sky, by the time it reaches my ‘self’ there is a massive amount more implicature and feeling embedded in that sensation. A mouse’s instincts give it a sense of openness and a wariness of predators, paired with a handful of learned associations from its life. A human has all the ingrained instincts (openness, warmth, life, safety) plus learned associations such as variation based on the precise tinge and cloudiness of the sky, specific times in their life, and so on. In a way, I view collapsing all of these under one sensation like “seeing-sky” as very reductive. While both effectively have “SEEING SKY=true”, it is more that the human is experiencing dozens of different sensations while the mouse is experiencing half a dozen. I find it very plausible that pleasure/pain is similar. We do not just get a “PAIN=true”; we get a burst of a dozen different sensations related to the pain, and different reactions to those sensations bursting out from the mind.
This sort of bundling under one term while ignoring volume is very questionable. If we take the naive application of ‘PAIN=true’, then we would consider a mind that can do lots of parallel processing as having the same degree of pain, when it receives a pain signal, as a far simpler mind that registers the same bare signal.
This is similar to, but not quite the same as, the Duplication view: Duplication is about isolated subcircuits of the brain mattering in their own right, while this point is about the ‘self’ actually receiving a lot more inputs than the bundled concept suggests. I think a lot of the confusion here comes from iffy ontology: human intuition is tracking some of these factors, but they haven’t been pinned down, and so they are hard for most people to talk about.
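As a toy contrast for the bundling point above (my own framing, not a claim about how brains actually encode pain), compare collapsing an experience to a single ‘PAIN=true’ bit with counting the distinct sensations that fire alongside it. The sensation lists are invented for illustration.

```python
# Toy contrast: one "PAIN=true" bit versus the bundle of distinct sensations
# that fire together. Sensation lists are invented for illustration only.

experiences = {
    "mouse, shocked": {"nociception", "avoid this place"},
    "human, shocked": {"nociception", "avoid this place", "startle",
                       "loss of felt safety", "I was wrong about being safe",
                       "distrust of the situation"},
}

for who, sensations in experiences.items():
    pain_flag = bool(sensations)   # naive bundling: both come out identical
    print(f"{who}: PAIN={pain_flag}, distinct sensations={len(sensations)}")
```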
I think the question is less “Why do we think that the objective comparison between these things should be anchored on neuron count?” and more “How do we even begin to make a subjective value judgement between these things?”
In that case, I would say that when an animal is experiencing pleasure/pain, that experience probably takes the form of information in the brain, and information content scales roughly with neuron count. All I can really say is that I want less suffering-like information processing in the universe.
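If you do take “information content scales with neuron count” at face value, the bookkeeping it implies is simple. Here is a minimal sketch of that accounting; the neuron counts are rough order-of-magnitude figures, and the pain intensities and headcounts are invented for illustration.

```python
# Minimal sketch of naive linear neuron-count weighting. Neuron counts are
# rough order-of-magnitude figures; pain intensities and counts are invented.

NEURONS = {"human": 86e9, "mouse": 7e7, "honeybee": 1e6}

def weighted_suffering(species: str, pain_intensity: float, individuals: int = 1) -> float:
    """'Amount' of suffering under a naive linear neuron-count weighting."""
    return NEURONS[species] * pain_intensity * individuals

# One human in mild pain versus a million bees in intense pain:
print(weighted_suffering("human", pain_intensity=0.1))                         # 8.6e9
print(weighted_suffering("honeybee", pain_intensity=1.0, individuals=10**6))   # 1.0e12
```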
See Why Neuron Counts Shouldn’t Be Used as Proxies for Moral Weight and maybe also Is Brain Size Morally Relevant?
I have made roughly this argument for relative moral weight, but I’m not comfortable with it.
I entirely agree that the subjective “volume” of pain is more likely tuned by evolution. (Edit:) But the functional effectiveness of the pain signal doesn’t seem to be what we care about or assign moral worth to; rather it is the degree of suffering, which must be based on some property of the information processing in the brain, and which is therefore likely related to brain complexity.
For me, neuron count is a very rough approximation, based on the reasoning that any reasonable way of defining moral worth must be at least on a continuum. It seems very strange to suppose that moral worth (or the type of consciousness that confers it) suddenly appears when a critical threshold is passed, and is entirely absent just below that threshold. One bear, beetle, or bacterium would have had no consciousness or moral worth, and then suddenly its offspring has them in full while being nearly indistinguishable in behavior.
I’ve had the opportunity to think about neural substrates of consciousness in a fair amount of depth. I still don’t have a good definition of whom we should assign moral worth to (and I think it’s ultimately a matter of preference). But to even approach being a sensible and internally consistent position, it seems like it’s got to be a continuous value. And neuron count is as close as I can get, since it’s a very rough proxy for the richness of information processing in a system on every dimension. So whichever definition(s) we settle on, neuron count will be at least in the (admittedly wide) ballpark.
A better final answer will count only the neurons and synapses contributing to whatever-it-is, will probably weight them by some nonlinear function, and will go into more depth than that. But neuron count is the best starting point I can think of.
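To show why the “nonlinear function of some sort” matters so much, here is a hypothetical comparison of a few candidate curves. The square-root and logarithmic forms are arbitrary stand-ins of my own choosing, not anything the comment commits to, and the neuron counts are rough order-of-magnitude figures.

```python
# Hypothetical weighting curves over neuron count N. The choice of curve
# dominates the human/mouse ratio; the specific forms here are arbitrary.
import math

def weight(n_neurons: float, scheme: str) -> float:
    if scheme == "linear":
        return n_neurons
    if scheme == "sqrt":              # an arbitrary sublinear power law
        return n_neurons ** 0.5
    if scheme == "log":
        return math.log10(n_neurons)
    raise ValueError(scheme)

human, mouse = 86e9, 7e7              # rough order-of-magnitude neuron counts
for scheme in ("linear", "sqrt", "log"):
    ratio = weight(human, scheme) / weight(mouse, scheme)
    print(f"{scheme:>6}: human/mouse weight ratio ~ {ratio:,.1f}")
# linear ~1228.6, sqrt ~35.1, log ~1.4
```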
Neuron count intuitively seems to be a better proxy for the variety/complexity/richness of positive experience. Then you can argue that you wouldn’t want to just increase the intensity of pleasure, since that’s just a relative number; what matters is that pleasure is interesting. And so you would assign lesser weights to less rich experience. You can also generalize this argument to negative experiences: maybe you don’t want to consider pain to be ten times worse just because someone multiplied some number by 10.
Isn’t pain in both wings worse than in one?
The point that a broader spectrum of sources needn’t require a greater spectrum of intensity is totally valid. Neuron count is a poor, noisy proxy for conscious experience even in human brains.
See my comment here. The cerebellum is the human brain region with the highest neuron count, but people born without a cerebellum show no apparent impact to their conscious experience; the cerebellum only affects motor control.
Some thoughts.
For clarity, my first reading of the functionalist framing was to consider the possible interpretation of a binary distinction: that either the whole entity can experience pain or it cannot, and thus we would have to count entities as the measure of welfare.
I agree that weighting by neurons doesn’t seem appropriate when pain is not a result of individual neurons but of their assembly. Weighting by neurons is then not much different from weighting by body mass, conditioned on having the required complexity. But why would a larger being carry more weight than a smaller one, everything else being equal? Wouldn’t that privilege large animals (and even incentivise growth)?
A comment on possible misinterpretations of the senses comparison: you should rule out (unless you intend it) the reading that you equate sense resolution with the intensity of pain sensation. I think you don’t equate them, but I’m not very sure.
Yes, social animals often possess more elaborate ways to express pain, including facial expressions, vocalizations, and behavioral changes, which can serve communicative functions within their group. However, suppression of pain expression is also widespread, especially in species where showing pain could lower social rank or make an individual vulnerable to predation or aggression[1]. The question is what this expression tells us about the sensation. For example, assuming introversion is linked to this expression, does it imply that extroverts feel more pain? I agree that more complex processing is needed to detect (reflect on) pain. Pain expression can serve signalling functions such as alerting without reflection, but more specific adaptations, such as familial care, require empathy, which arguably requires modeling others’ perceptions. Because expressing pain is suppressed in some species, we have to face this dichotomy: if the expression of pain informs us about the amount or intensity of pain, then it follows that the same amount of injury can lead to very different amounts of pain, including none, even within a species. But if the expression of pain doesn’t tell us anything about the amount of pain, then the question is: what does?
See Persistence of pain in humans and other mammals
I think the central argument is that subjective experience is ostensibly more profound the more information it integrates with, both at a single moment and over time. I would think of it, or of any experience, as the depth of cognition and attention over which the stimulus exerts coherent control (i.e., the number of feedback loops controlled or reoriented by that single bad experience, and the neural re-shuffling it requires), extrapolated over how long that ‘painful’ reprocessing continues to manifest as lived stimuli. If you have the brain of a goldfish, the pain of a pinch oscillates through a significantly smaller number of attention feedback loops than it does in a human, where a much larger set of cognitive faculties gets ‘jarred’ and has its attention stolen to get away from that pinch. Secondly, the degree of coherence our subjectivity inhabits is likely loosely correlated with this, as a consequence of having greater long-term retention faculties. If felt pain were solely a ‘miss’ within any agent’s objective function, then even the smallest ML algorithms would already ‘hurt’. That is, subjectivity is emergent from the depth and scale of these feedback loops (which are required by nature), but not isomorphic to them (a bare value-function miss).
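Read as a rough formula, this picture is something like: depth of an experience scales with the number of attention/feedback loops it captures times how long the reprocessing persists. The sketch below is my paraphrase, with invented numbers, not the commenter’s actual model.

```python
# Rough formalization of the feedback-loop picture: depth of an experience
# scales with the loops it captures and how long reprocessing persists.
# All numbers below are invented for illustration.

def experience_depth(loops_captured: int, reprocessing_hours: float) -> float:
    return loops_captured * reprocessing_hours

goldfish_pinch = experience_depth(loops_captured=3, reprocessing_hours=0.01)
human_pinch = experience_depth(loops_captured=40, reprocessing_hours=0.5)
print(goldfish_pinch, human_pinch)   # 0.03 vs 20.0
```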
I don’t have a detailed writeup, but this seems straightforward enough to fit in this comment: you’re conducting your moral reasoning backwards, which is why it looks like other people have a sophisticated intuition about neurobiology you don’t.
The “moral intuition”[1] you start with is that insects[2] aren’t worth as much as people, and then, if you feel like you need to justify that, you can use your knowledge of the current best understanding of animal cognition to construct a metric that fits, with as much complexity as you like.
[1] I’d call mine a “moral oracle” instead. Or a moracle, if you will.
[2] I’m assuming this post is proximately motivated by the Don’t Eat Honey post, but this works for shrimp or whatever too.