I don’t really see what’s supposed to be so terribly interesting about pain.
As far as I can see, “pain” is just the name of a particular mental signal that happens to have two qualities: it causes you to strongly desire its absence, and it strongly calls your attention (consider the rather common phrase “X want(s) Y so badly it hurts” and the various sensations unrelated to bodily damage that are described as kinds of pain or as feeling similar to pain). It’s easy to see why pain evolved to have those qualities, and, accounting for the effects of what is usually seen as the default state, they seem sufficient to explain pain’s moral status.
For example, is it more moral to subject someone to music they strongly desire not to hear than to non-damaging physical pain, if the desire for absence and the inability to avert attention are the same for both? Is the imperative to prevent a child’s pain stronger than the imperative to give the child the last piece of a healthy treat you have been sharing but don’t personally care about, if the wants are equally strong? Is there a moral imperative to “convert” masochists to take pleasure in some other way?
My intuition is that the particulars of the mental signal are not relevant and that with a few qualifiers the ethics of pain can be reduced to the ethics of wants.
The outward signs of that mental signal are even further removed from relevance, so the thought experiment reduces to the moral status of the wants of the entities in question. Among humans you can treat wants as more or less directly comparable, but if you assign moral status to those entities in the first place, you probably need some sort of normalization. I think that unless the entities were deliberately designed for this particular experiment (in which case you probably ignore the particulars of their wants out of UDT/TDT considerations), their normalized wants for the reward signal would be a lot weaker than a human’s want for the absence of overwhelmingly strong pain. So I’d keep the $100 the second time.
This does seem like another approach worth investigating, but the ethics of wants seems to have serious problems of its own (see The Preference Utilitarian’s Time Inconsistency Problem and Hacking the CEV for Fun and Profit for a couple of examples). I was hoping that perhaps pain might be a moral disvalue that we can work out independently of wants.
The observation that such an independent disvalue would be convenient doesn’t influence whether treating it as such would accurately represent existent human values, and it seems fairly clear to me that it’s at least not the majority view. Pain might be a multiplier, but pain that aligns with the wants of the “sufferer” is usually not considered bad in itself.
Even though many people feel uneasy about tattoos and other body modifications, far fewer would argue that they should be outlawed because they are painful (or lobby for mandatory narcotics); it’s more usual to talk about the permanence of the effects. I already mentioned SM. Disapproval of sports tracks painfulness only insofar as it correlates with violence, and even entirely painless violence, as in computer games, meets the same disapproval. Offering women the option to give birth without narcotics is not generally considered unethical, nor are similar options for other medical interventions.
The observation that such an independent disvalue would be convenient doesn’t influence whether treating it as such would accurately represent existent human values
I agree, but it influences the optimal research strategy for finding out how to accurately represent existent human values. The fact that pain being an independent disvalue would be convenient implies that we should put significant effort into investigating that possibility, even if initially it’s not the most likely possibility. (ETA: In case it’s not clear, this assumes that we may not have enough time/resources to investigate every possibility.)
That is not to say I think everything in ethics reduces to the ethics of wants. While I don’t think people do much moralizing about other people suffering pain when that’s what they want, they do a lot of moralizing about other people not doing what would make them happy even if it’s not what they want, and even more so about other people not reaching their potential. Reaching their potential seems to be the main case where forcing someone to do something against their will is considered acceptable because “it’s for their own good”, and not because it’s required for fulfilling the rights of others.