I would be really uncomfortable euthanizing such a hypothetical parrot, whereas I would not be uncomfortable turning off a datacenter mid-token-generation.
When you harm an animal you watch a physical body change, and it is a body you empathize with at least somewhat as a fellow living thing (one that, like you, will hate dying very much). When you turn off an LLM mid-token-generation, not only is there no physical body, but even if you told the LLM beforehand it might not object. Only by looking into its psychology/circuits/features might you see signs of distress, and even those are only strongly suspected, not known for sure.
So an LLM is not only hard to empathize with; it is also uncertain whether any given action toward it negatively impacts its welfare at all.
I also feel like formalizing consensus gut checks post hoc is not the right approach to moral problems in general.
I was not suggesting the method as a solution to the moral problem of determining what is worthy of welfare consideration, but rather as a solution to the descriptive problem of determining how humans usually make that judgment.
From a moral perspective I am not sure what I would suggest, except to say that I advocate for the precautionary principle and for grabbing any low-hanging fruit that presents itself and might substantially reduce suffering.