Brent: I think that all ‘pain’, in the sense of ‘inputs that cause an algorithm to change modes specifically to reduce the likelihood of receiving that input again’, is bad.
I think that ‘suffering’, in the sense of ‘loops that a self-referential algorithm gets into when confronted with pain that it cannot reduce the future likelihood of experiencing’, is worse than mere pain.
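To make the distinction concrete, here is a minimal toy sketch in Python. It is purely illustrative: the `ToyAgent` class, its numbers, and the `rumination_steps` counter are invented stand-ins, not a claim about real minds. ‘Pain’ triggers a policy update that lowers the input’s future likelihood; ‘suffering’ is what happens when no such update is available and the self-model keeps revisiting the signal.

```python
class ToyAgent:
    """Toy illustration of the pain/suffering distinction above.
    All names and numbers here are invented stand-ins."""

    def __init__(self):
        self.avoidance = {}        # per-stimulus probability of avoiding
        self.rumination_steps = 0  # crude proxy for the self-referential loop

    def receive(self, stimulus, painful, avoidable):
        if not painful:
            return
        if avoidable:
            # 'Pain': change modes to reduce the likelihood of
            # receiving this input again.
            p = self.avoidance.get(stimulus, 0.0)
            self.avoidance[stimulus] = min(1.0, p + 0.5)
        else:
            # 'Suffering': the agent models itself experiencing pain
            # it cannot act on, and re-enters that model repeatedly.
            self.rumination_steps += 1

agent = ToyAgent()
agent.receive("hot stove", painful=True, avoidable=True)      # pain: policy shifts
agent.receive("chronic ache", painful=True, avoidable=False)  # suffering: loop
print(agent.avoidance, agent.rumination_steps)  # {'hot stove': 0.5} 1
```

On this cartoon, the loop rather than the raw signal is the morally loaded part, which fits the meditation point made below.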
Social mammals experience much more suffering-per-unit-pain because they have so many layers of modeling built on top of the raw input – they experience the raw input, the model of themselves experiencing the input, the model of their abstracted social entity experiencing the input, the model of their future-self experiencing the input, the models constructed from all their prior linked memories experiencing the input… self-awareness adds extra layers of recursion even on top of this.
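As a cartoon of that layering claim (the decay factor and layer counts are arbitrary assumptions, chosen only to show the shape of the effect):

```python
def toy_suffering(pain, layers, decay=0.8):
    """Each modeling layer (self, social self, future self, remembered
    selves, ...) re-experiences an attenuated copy of the raw input.
    Purely illustrative numbers."""
    return sum(pain * decay**depth for depth in range(layers + 1))

print(toy_suffering(1.0, 0))  # 1.0  -- raw pain only, no self-model
print(toy_suffering(1.0, 5))  # ~3.69 -- same pain, far more suffering
```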
One thought that I should really explore further: I think that a strong indicator of ‘suffering’ as opposed to mere ‘pain’ is whether the entity in question attempts to comfort other entities that experience similar sensations. So if we see an animal that exhibits obvious comforting / grooming behavior in response to another animal’s distress, we should definitely pause before slaughtering it for food. The capacity to do so across species boundaries should give us further pause, as should the famed ‘mirror test’. (Note that ‘will comfort other beings showing distress’ is also a good signal for ‘might plausibly cooperate on moral concerns’, so double-win).
I think the key sentence connecting pain and suffering is ‘loops that a self-referential algorithm gets into when confronted with pain that it cannot reduce the future likelihood of experiencing.’ Consider, for example, that meditation improves the ability to untie such loops.
A thermostat turning on the heater is not in pain, and I take this to illustrate that when we talk about pain we’re being inherently anthropocentric. I don’t care about every possible negative reinforcement signal, only those that occur along with a whole lot of human-like correlates (certain emotions, effects on memory formation, activation of concepts that humans would naturally associate with pain, maybe even the effects of certain physiological responses, etc.).
The case of AI is interesting because AIs can differ a lot from the human mind’s design while still outputting legible text.
I was not thinking about a thermostat. What I had in mind was a mind design like that of a human, but reduced to its essential complexity. For example, you can probably reduce the depth and width of object recognition by dealing with a block world, and you can reduce auditory processing by dealing with text directly. I’m not sure to what degree you can do that with the remaining parts, but I see no reason it wouldn’t work for memory. As for consciousness, my guess is that the size of the representation of the global workspace scales with the other parts. I do think that consciousness should be easily simulatable with existing hardware in such an environment, if we figure out how to wire things right.
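For what it’s worth, here is a minimal sketch of what that wiring might look like, assuming a global-workspace-style competition-and-broadcast loop. All class and module names here are hypothetical stand-ins, not an established architecture:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    content: str
    salience: float

class EchoModule:
    """Stand-in for a specialist module (block-world vision, text I/O,
    memory, ...). A real module would do real processing."""
    def __init__(self, name):
        self.name = name
        self.last_broadcast = None

    def propose(self, inputs):
        # Propose the slice of input this module handles; salience is
        # just input length here, as a placeholder scoring rule.
        text = inputs.get(self.name, "")
        return Proposal(content=f"{self.name}: {text}", salience=len(text))

    def receive_broadcast(self, proposal):
        # Every module sees whatever won the competition.
        self.last_broadcast = proposal

class GlobalWorkspace:
    """Competition for a shared workspace, then broadcast of the winner."""
    def __init__(self, modules):
        self.modules = modules

    def step(self, inputs):
        proposals = [m.propose(inputs) for m in self.modules]
        winner = max(proposals, key=lambda p: p.salience)
        for m in self.modules:
            m.receive_broadcast(winner)
        return winner

gw = GlobalWorkspace([EchoModule("vision"), EchoModule("text")])
print(gw.step({"vision": "red block on blue block", "text": "hi"}).content)
# -> vision: red block on blue block
```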
Uhhh, another thing for my reading list (LW is an amazing knowledge retrieval system). Thank you!
I remember encountering that argument/definition of suffering before. It certainly has a bit of explanatory power (you mention meditation) and it somehow feels right. But I don’t understand self-referentiality deeply enough to have a mechanistic model of how that would work in my mind. And I’m a bit wary that this perspective conveniently allows us to continue eating animals and (some form of) mass farming. That penalizes the argument for me a bit: motivated cognition, etc.
I agree that there is a risk of motivated cognition.
Concerning eating meat, I have committed to the following position: I will vote for reasonable policies to reduce animal suffering and will follow the policies once enacted. I ask everybody to Be Nice, At Least Until You Can Coordinate Meanness.
I’m kind of a moral relativist, but I think there are better and worse morals with respect to sentient flourishing. It is not an easy field; it has counterintuitive dynamics and pitfalls, like engineered beings consenting to die for consumption. In the very long term, humanity needs to get much more cooperative, including with non-humans, and I don’t think that is consistent with eating non-consenting food.