Just to be That Guy, I’d like to remind everyone that animal sentience means that vegetarianism, at the very least (and, because of the intertwined nature of the dairy, egg, and meat industries, most likely veganism), is a moral imperative, to the extent that your ethical values incorporate sentience at all. I’d go further and say that uplifting to sophonce those animals we can, once we are able to at some future time, is also a moral imperative, but that relies on reasoning and values I hold that may not be self-evident to others, such as the view that increasing the agency of an entity that isn’t drastically misaligned with other entities is fundamentally good.
I disagree, for the reasons I describe in this comment: https://www.lesswrong.com/posts/Htu55gzoiYHS6TREB/sentience-matters?commentId=wusCgxN9qK8HzLAiw
I do admit to having quite a bit of uncertainty around some of the lines I draw. What if I’m wrong and cows do have a very primitive sort of sapience? That would imply we should not raise cows for meat (though I still think it’d be fine to keep them as pets and then eat them after they’ve died of natural causes).
I don’t have so much uncertainty about this that I’d say there’s any reasonable chance fish are sapient, though, so I still think that even if you’re worried about cows you should feel fine about eating fish (if you agree with the moral distinctions I make in my other comment).
We’re not talking about sapience though, we’re talking about sentience. Why does the ability to think have any moral relevance? Only possessing qualia, being able to suffer or have joy, is relevant, and most animals likely possess that. I don’t understand the distinctions you’re making in your other comment. There is one, binary distinction that matters: is there something it is like to be this thing, or is there not? If yes, its life is sacred, if no, it is an inanimate object. The line seems absolutely clear to me. Eating fish or shrimp is bad for the same reasons that eating cows or humans is. They are all on the exact same moral level to me. The only meaningful dimension of variation is how complex their qualia are—I’d rather eat entities with less complex qualia over those with more, if I have to choose. But I don’t think the differences are that strong.
That is a very different moral position from the one I hold. I’m curious what your moral intuitions say about the qualia of reinforcement learning systems. Have you considered that many machine learning systems seem to contain subsystems that would compute qualia much as a nervous system does, and that such systems are indeed more complex than the nervous systems of many living creatures, such as jellyfish?
I don’t know what to think about all that. I don’t know how to determine where the line is between having qualia and not. I just feel certain that any organism with a brain sufficiently similar to those of humans (certainly all mammals, birds, reptiles, fish, cephalopods, and arthropods) has some sort of internal experience. I’m less sure about jellyfish and the like. I suppose the intuition comes from the fact that the entities I mentioned seem to actively orient themselves in the world, but it’s hard to say.
I don’t feel comfortable speculating about which AIs have qualia, or whether any do at all. I am not convinced of functionalism, and I suspect that consciousness has something to do with the physical substrate, primarily because I can’t imagine how consciousness could be subjectively continuous (one of its most fundamental traits, in my experience!) without a continuously inhabited brain, rather than being a program that can be loaded in and out of anything, copied endlessly, with no fixed temporal relation between subjective moments.