What are your thoughts on David Pearce’s “abolitionist” project? He suggests genetically engineering wild animals to not experience negative valences, but still show the same outward behavior. From a sentientist stand-point, this solves the entire problem, without visibly changing anything.
I think it’s basically impossible using just genetic engineering. There are documented cases of humans born without the ability to feel pain, and they don’t usually live long: they tend to die in stupid accidents, like jumping off a building, because they never learnt as a kid that that hurts, so you shouldn’t do it. Similarly in leprosy, where the ability to feel pain is lost due to bacterial damage to the nerves (typically as an adult, once you have already learnt not to do that sort of dumb stunt): the slow, progressive, disfiguring damage to the hands and face isn’t directly bacterial, it’s caused by the cumulative effect of a great many minor injuries that the patient doesn’t notice in time, because they can no longer feel pain.
So producing the same behavior without negative valences would require a much larger, more detailed world model, able to correctly predict everything that would have hurt or been unpleasant, and how the creature would have reacted to it, and then trigger that reaction. Even assuming you could somehow achieve that modelling task in a nervous system as a “philosophical zombie”, involving no actual negative valences, just a prediction of their effects on an animal (it’s very unclear to me how you could even tell: I suspect “philosophical zombies” are a myth, and if they’re not, they’re a priori indistinguishable), we currently have no idea how to bioengineer something like that, and the extra nervous tissue required to do all that extra processing would clearly add a lot to the animal’s physiological needs. The most plausible approach I can think of would be some sort of nanotech cyborging where the extra processing was done in the cyborg parts, which would need to be much more compact and energy-efficient than nervous tissue (i.e. roughly Borg-level technology). So it’s an emotionally appealing idea, but I suspect it’s actually even harder to implement than what I proposed. For largish animals, it might actually be technologically easier to just uplift them to sapience and have them join our society.
Rereading https://www.abolitionist.com/, David Pearce doesn’t go into much detail there on his proposal for wild animals, but even he appears to recognize that it’s going to take more than just genetic engineering:
But the exponential growth of computer power and nanorobotic technologies means that we can in theory comprehensively re-engineer marine ecosystems too.
A more feasible intermediate proposal might be some form of “reduced-harm ecosystem”: eliminate all parasites and diseases, and, where something is required to keep the ecology stable with the previous diseases removed, bioengineer replacement diseases whose symptoms are as mild as possible apart from causing sterility. That still doesn’t eliminate predation, but perhaps we could bioengineer some form of predation-induced unconsciousness, where once a predator is actually eating them and escape is clearly impossible, prey animals just pass out. That still leaves hunger, accidental injuries, injuries from a predator that the prey escaped, and so forth. Radio collars, park rangers, anesthetic darts and vets, or robotic versions of those, would be the best we could do for that.
Pearce has the idea of “gradients of bliss”, which he uses to try to address the problem you raised about insensitivity to pain being hazardous. He thinks that even if all of the valences are positive, the animal can still be motivated to avoid danger if doing so yields an even greater positive valence than the alternatives. So the prey animals are happy to be eaten, but much happier to run away.
To me, this seems possible in principle. When I feel happy, I’m still motivated at some low level to do things that will make me even happier, even though I was already happy to begin with. But actually implementing “gradients of bliss” in biology seems like a post-ASI feat of engineering.
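If it helps, here’s a toy sketch (my own illustration, not Pearce’s formalism) of why shifting every valence up by a constant doesn’t, by itself, change which option a creature picks: only the relative differences matter to a maximizer.

```python
# A minimal sketch: an agent that always picks the highest-valence option behaves
# identically whether valences span negative-to-positive or are all shifted into
# positive "gradients of bliss", because only the relative ordering matters.

def choose(valences: dict[str, float]) -> str:
    """Pick the action with the highest valence."""
    return max(valences, key=valences.get)

# Ordinary scale: being eaten is strongly negative, escaping mildly positive.
ordinary = {"freeze": -5.0, "get_eaten": -10.0, "run_away": +2.0}

# "Gradients of bliss": the same options shifted by a constant so all are positive.
blissful = {action: v + 11.0 for action, v in ordinary.items()}

assert choose(ordinary) == choose(blissful) == "run_away"
```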
(By the way, your idea of predation-induced unconsciousness isn’t one I had heard before, it’s interesting.)
Whatever positive valence stopped when you were injured would need to be as strong a motivator as pain is. So somewhere on the level of “I orgasm continuously unless I get hurt, then it stops!” But that’s just shifting the valence scale, and I think by default it would fail due to hedonic adaptation: brains naturally reset their expectations. That’s the same basic mechanism as opiate addiction, and it’s pretty innate to how the brain (or any complex set of biochemical pathways) works: they’re full of long-term feedback loops evolved to keep them working even when one component is out of whack, say due to a genetic disease.
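As a toy illustration of the resetting I mean (just an exponential-moving-average baseline of my own devising, not a real neuroscience model): if the felt signal is the raw valence minus a slowly-updated baseline of recent experience, a permanent “continuous orgasm” offset fades out of the felt signal within a few time constants.

```python
# Toy model of hedonic adaptation: a constant upward shift gets cancelled because
# the baseline slowly drifts toward whatever level of experience is typical.

def felt_valence_trace(raw_valences, adaptation_rate=0.05):
    """Return felt valence (raw minus adaptive baseline) at each time step."""
    baseline = 0.0
    felt = []
    for raw in raw_valences:
        felt.append(raw - baseline)
        # The feedback loop: baseline drifts toward the recent average experience.
        baseline += adaptation_rate * (raw - baseline)
    return felt

trace = felt_valence_trace([100.0] * 200)       # constant bliss, no injuries
print(round(trace[0], 1), round(trace[-1], 1))  # ~100.0 at first, near 0.0 by the end
```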
This is related to a basic issue in the design of Utilitarian ethical systems. As is hopefully well known, you need your AI to maximize the amount of positive utility (pleasure), not minimize the amount of negative utility (pain), otherwise it will just euthanize everyone before they can next stub their toe. (Obviously getting this wrong is an x-risk, as with pretty much everything in basic ethical-system design.) So you need to carefully set a suitable zero-utility level, and that level needs to be low enough that you actually would want the AI to euthanize you if your utility level for the rest of your life was going to stay below it. That means the negative-utility region is the sort of agonizing pain level at which we put animals down, or allow people to sign the paperwork for voluntary medical euthanasia. That’s a pretty darned low valence level, well below what we start calling ‘pain’. On a hospital numerical “how much pain are you in?” scale, it’s probably somewhere around spending the rest of your life at an eight or worse: enough pain that you can’t pay much attention to anything else, ever.
So my point is, if you just stubbed your toe and are in pain (say a six on the hospital pain scale), then by that offset scale of valence levels (which is what our AIs have to be using for utility in their ethical systems), your utility is still positive. You’re not ready to be euthanized, and not just because you’ll feel better in a few minutes. So by utility standards, our normal positive/negative valence scale already has a lot of hedonic adaptation built into it. What I think you’re suggesting is to re-engineer humans and animals so the valence scale matches the utility scale: move the zero point down to what was previously −8 (pain level 8), lock it there by removing hedonic adaptation, and then truncate the remaining part of the scale below the new zero (i.e. hospital pain levels 9 and 10). Possibly by having the animal pass out?
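To make that offset arithmetic concrete, here’s a back-of-envelope sketch (the threshold value and function names are just my illustrative choices, not anything from Pearce or from hospital practice):

```python
# Two scales over the hospital 0-10 pain range: the AI's utility scale puts zero
# at roughly pain level 8 (the voluntary-euthanasia threshold), while the proposed
# re-engineered valence scale uses the same zero point but truncates everything
# below it (the animal passes out instead of experiencing levels 9-10).

EUTHANASIA_THRESHOLD = 8  # hospital pain level treated as zero utility (illustrative)

def utility_from_pain(pain_level: float) -> float:
    """Offset scale: a stubbed toe (pain ~6) still counts as positive utility."""
    return EUTHANASIA_THRESHOLD - pain_level

def reengineered_valence(pain_level: float) -> float:
    """Proposed scale: same zero point, no hedonic re-centering, floored at zero."""
    return max(0.0, EUTHANASIA_THRESHOLD - pain_level)

print(utility_from_pain(6))        # +2: in pain, but not euthanasia territory
print(reengineered_valence(9.5))   # 0.0: would have been negative, truncated away
```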
I can’t immediately tell you why that wouldn’t work, but I note that it’s not the solution evolution came up with, so it’s presumably not optimal. Hedonic adaptation basically changes what the animal is motivated by into “try to do better than I expected to”. Which is (as various people have observed of consumerism) basically a treadmill. Presumably evolution did this for efficiency, to minimize the computational complexity of the problem. But if the resulting increase in complexity wasn’t that bad, maybe we wouldn’t need to enlarge the prefrontal cortex (assuming that’s where this planning occurs in most mammals) all that much?
Yeah, it’s hard to say whether this would require restructuring the whole reward center in the brain, or whether the needed functionality is already there and just needs to be configured with different “settings” to shift the origin and truncate everything below zero.
My intuition is that evolution is blind to how our experiences feel in themselves. I think it’s only the relative differences between experiences that matter for signaling in our reward center. This makes a lot of sense when thinking about color and “qualia inversion” thought experiments, but it’s trickier with valence. My color vision could become inverted tomorrow, and it would hardly affect my daily routine. But not so if my valences were inverted.
Good news! That’s already the way the world works.
What about our pre-human ancestors? Is the twist that humans can’t have negative valences either?