My reading of this is that implicit in your definition of “welfare” is the idea that being deserving of welfare comes with an inherent trade-off that humans (and society) make in order to help you avoid suffering.
Take your thought experiment with the skin tissue. Suppose I did say it was deserving of welfare: what would that mean? In a vacuum some people might think it’s silly, but most would probably just shrug it off as an esoteric but harmless belief. However, if by arguing that it was deserving of welfare I was potentially blocking a highly important experiment that might end up curing skin cancer, people would probably no longer view my belief as innocuous.
As such, maybe a good way to approach “deserving welfare” is not to think of it as a binary, but as a spectrum. The higher a being rates on that spectrum, the more you would be willing to sacrifice to make sure it doesn’t suffer. A mouse is deserving of welfare to the extent that most people agree torturing one for fun should be illegal, but not so deserving that most people would agree torturing one for a solid chance of curing cancer should be illegal.
That rates higher than a bunch of skin cells hooked up to a speaker/motor, where you would probably get shrugs regardless of the situation.
You could then look at what things have in common as they rate higher or lower on the welfare scale, try to pin down the uniformly present qualities, and use those as indicators of increasing welfare-worthiness. You could do this based on the previously mentioned “most people” reactions, or based on your own gut reaction.
I think you’re right that deserving welfare should be imagined as a spectrum, and that suffering should be one as well. However, people would still place things radically differently on said spectrum, and that confuses me. As I said, any animal that had LLM-level capabilities would be pretty universally agreed to be deserving of some welfare. People remark that LLMs are stochastic parrots, but if an actual parrot could talk as well as an LLM, people would be even more empathetic toward parrots. I would be really uncomfortable euthanizing such a hypothetical parrot, whereas I would not be uncomfortable turning off a datacenter mid token generation. I don’t know why this is.
I guess all this boils down to your last point: what uniformly present qualities do I look for? It seems that everything I empathize with has a nervous system that evolved. But that seems so arbitrary, and my intuition is that there is nothing special about evolution, even if gradient descent on our current architectures is not a method of generating SDoW. I also feel like formalizing consensus gut checks post hoc is not the right approach to moral problems in general.
I think I feel the same sort of ‘What if we just said EVERYTHING deserves welfare?’ thought. I care for my birds, but I also care for my plants, and care for my books, each in their own way.
Like, if someone built this small skin-device-creature, and then someone else came along and smashed it then burned the pieces, I think I would be a little sad for the universe to have ‘lost’ that object. So there’s SOMETHING there that is unrelated to “can it experience pain?”, for me.