If I want to divide X units of torture between two animals, one of which is cuter than the other, then from a purely consequentialist position there’s no reason to prefer one allocation to the other.
Well, humans seem to be more upset by images of baby seals being clubbed than by the death of less cute but similarly ‘conscious’ creatures, so that might factor into your total suffering calculation. That aside, though, this does seem to follow from your premises.
It might help if you think of me as trying to minimize the number of suffering*consciousness units.
Why is that preference uniquely privileged, though? What justifies it over preferring to minimize the number of suffering*(value I assign to animal) units? If I value something about dogs over pigs (let’s call it ‘empathy units’, because that is something like a description of the source of my preference), why is that a less justified choice of preference than ‘consciousness’?
If you just genuinely value what you’re calling ‘consciousness’ here over any other measure of value, that’s a perfectly reasonable position to take. You seem to want to universalize the preference, though, and I get the impression that you recognize that it goes against most people’s instinctive preferences. If you want to persuade others to accept your preference ranking (maybe you don’t—it’s not clear to me), then I think you need to come up with a better justification. You should also bear in mind that you may find yourself arguing to sacrifice humanity for a super-conscious paperclip maximizer. Is that really a position you want to take?
Well, I admit to being one of the approximately seven billion humans who can’t prove their utility functions from first principles. But I think there’s a very convincing argument that consciousness is in fact what we’re actually looking for and naturally taking into account.
Happiness is only happiness, and pain is only pain, insofar as it is perceived by awareness. If a scientist took a nerve cell with a pain receptor, put it in a Petri dish, and stimulated it for a while, I wouldn’t consider this a morally evil act.
I find in my own life that different levels of awareness correspond to different levels of suffering. Although something bad happening to me in a dream is bad, I don’t worry about it nearly as much as I would if it happened when I was awake and fully aware. Likewise, if I’m zonked out on sedatives, I tend to pay less attention to my own pain.
I hypothesize, based on intuition and my knowledge of their nervous systems, that different animals have different levels of awareness. If so, they would be able to experience different levels of suffering. What I meant earlier by saying my utility function multiplied suffering by awareness would have been better phrased as:
Suffering = (bad things) × awareness
while trying to minimize suffering. This is why, for example, doing all sorts of horrible things to a rock is a morally neutral act, doing them to an insect is probably bad but not anything to lose sleep over, and doing them to a human is a moral problem even if it’s a human I don’t personally like.
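As a rough sketch of this weighting scheme (the awareness coefficients below are purely illustrative placeholders, not measured quantities), the rock/insect/human ordering falls out of the multiplication like so:

```python
# Toy sketch of the awareness-weighted suffering calculation described
# above. All awareness values are invented for illustration only.

# Hypothetical awareness levels on an arbitrary 0-to-1 scale.
AWARENESS = {
    "rock": 0.0,     # no nervous system, so no capacity to suffer
    "insect": 0.05,
    "human": 1.0,
}

def suffering(being: str, bad_things: float) -> float:
    """Moral weight of a harm = (magnitude of bad things) * awareness."""
    return bad_things * AWARENESS[being]

# The same harm counts for nothing against a rock, a little against an
# insect, and fully against a human.
harm = 10.0
print(suffering("rock", harm))    # 0.0
print(suffering("insect", harm))  # 0.5
print(suffering("human", harm))   # 10.0
```

A consequentialist of this stripe would then minimize the sum of these weighted terms across all affected beings.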
Your paperclip example is a classic problem known as the utility monster. I don’t really have any especially brilliant solution beyond what has already been said about the issue. To some degree I bite the bullet: if there were some entity whose nervous system was so acute that causing it the slightest amount of pain would correspond to 3^^^3 years of torture for a human being, I’d place high priority on keeping that entity happy.
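To see why the utility monster bites under an awareness-weighted scheme, here is the arithmetic with invented numbers (far smaller than 3^^^3, but the dominance effect is the same):

```python
# Toy illustration of the utility-monster problem under an
# awareness-weighted suffering model. The magnitudes are invented
# purely to show how the arithmetic plays out.

MONSTER_AWARENESS = 1e12   # hypothetical hyper-acute nervous system
HUMAN_AWARENESS = 1.0

def weighted_suffering(bad_things: float, awareness: float) -> float:
    return bad_things * awareness

# The monster's slightest discomfort...
monster_pain = weighted_suffering(0.001, MONSTER_AWARENESS)

# ...outweighs serious suffering spread across a million humans.
human_pain = weighted_suffering(1.0, HUMAN_AWARENESS) * 1_000_000

print(monster_pain > human_pain)  # True
```

Once one term in the sum can be made arbitrarily large, minimizing total suffering means every trade-off resolves in the monster’s favour.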
Well, I admit to being one of the approximately seven billion humans who can’t prove their utility functions from first principles.
But you seem to think (and correct me if I’m misinterpreting) that it would be better if we could. I’m not so sure. Further, you seem to think that, given that we can’t, it’s still better to override our felt, intrinsic preferences, which are hard to fully justify, with unnatural preferences whose sole advantage is that they are easier to express in simple sentences.
Now, I’m not sure you’re actually claiming this, but with the pig/dog comparison you seem to be acknowledging that many people value dogs more than pigs (I’m not clear on whether you have this instinctive preference yourself), yet holding that, based on some abstract concept of levels of consciousness (itself subjective given our current knowledge), we should override our instincts and judge them to be of equal value. I’m saying “screw the abstract theory; I value dogs over pigs, and that’s sufficient moral justification for me.” I can give you rationalizations for my preference—the idea that dogs have been bred to live with humans, for example—but ultimately I don’t think the rationalization is required for moral justification.
But I think there’s a very convincing argument that consciousness is in fact what we’re actually looking for and naturally taking into account.
If this is true, then we should prefer our natural judgements (we value cute baby seals highly, and that’s fine: what we’re really valuing is consciousness, not the fact that they share facial features with human babies and so trigger protective instincts). But you can’t have it both ways—either we prefer dogs to pigs because they really are ‘more conscious’, or we should fight our instincts and value them equally because our instincts mislead us. I’d agree that what you call ‘consciousness’ or ‘awareness’ is a factor, but I don’t think it’s the most important feature influencing our judgements, and I don’t see why it should be.
To some degree I bite the bullet: if there were some entity whose nervous system was so acute that causing it the slightest amount of pain would correspond to 3^^^3 years of torture for a human being, I’d place high priority on keeping that entity happy.
And it’s exactly this sort of thing that makes me inclined to reject utilitarian ethics. If following utilitarian ethics leads to morally objectionable outcomes, I see no good reason to think the utilitarian position is right.