My current view is that most animals are not people, in the sense that they are not subject to moral concern. Of course, I do get upset when I see things such as animal abuse, but it seems to me that helping animals only nets me warm fuzzy feelings. I know animals react to suffering in a manner that we can sympathize with, but it just seems to me that they are still just running a program that is “below” that of humans. I think I feel that “react to pain” does not equal “worthy of moral consideration.” The only exceptions to this in my eyes may be “higher mammals” such as other primates. Yet others on this site have advocated concern for animal welfare. Where am I confused?
First thing to note is that “worthy of moral consideration” is plausibly a scalar. The philosophical & scientific challenges involved in defining it are formidable, but in my books it has something to do with the extent to which a non-human animal experiences suffering. So I am much less concerned with hurting a mosquito than a gorilla, because I suspect mosquitoes do not experience much of anything, but I suspect gorillas do.
Although I think ability to suffer is correlated with intelligence, it’s difficult to know whether it scales with intelligence in a simple way. Sure, a gorilla is better than a mouse at problem-solving, but that doesn’t make it obvious that it suffers more.
Consider the presumed evolutionary functional purpose of suffering, as a motivator for action. Assuming the experience of suffering does not require very advanced cognitive architecture, why would a mouse necessarily experience vastly less suffering than a more intelligent gorilla? It needs the motivation just as much.
To sum up, I have a preference for creatures that can experience suffering to not suffer gratuitously, as I suspect that many do (although the detailed philosophy behind this suspicion is muddy to say the least). Thus, utilitarian veganism, and also the unsolved problem of what the hell to do about the “Darwinian holocaust.”
Do you think that all humans are persons? What about unborn children? A 1-year-old? A mentally handicapped person?
What are your criteria for granting personhood? Is it binary?
I have no idea what I consider a person to be. I think that I wish it was binary because that would be neat and pretty and make moral questions a lot easier to answer. But I think that it probably isn’t. Right now I feel as though what separates person from nonperson is totally arbitrary.
It seems as though we evolved methods of feeling sympathy for others, and now we attempt to make a logical model from that to define things as people. It’s like “person” is an unsound concept that cannot be organized into an internally consistent system. Heck, I’m actually starting to feel like all of human nature is an internally inconsistent mess doomed to never make sense.
Are you confused? It seems like you recognize that you have somewhat different values than other people. Do you think everyone should have the same values? In that case all but one of the views is wrong. On the other hand, if values can be something that’s different between people it’s legitimate for some people to care about animals and others not to.
I am VERY confused. I suspect that some people can value some things differently, but it seems as though there should be a universal value system among humans as well. The thing that distinguishes “person” from “object” seems to belong to the latter.
Is that a normative ‘should’ or a descriptive ‘should’?
If the latter, where would it come from? :-)
Three hypotheses, which may not be mutually exclusive:
1) Some people disagree (with you) about whether or not some animals are persons.
2) Some people disagree (with you) about whether or not being a person is a necessary condition for moral consideration; here you’ve stipulated ‘people’ as ‘things subject to moral concern’, but that word may be too connotation-laden for this to be effective.
3) Some people disagree (with you) about ‘person’/‘being worthy of moral consideration’ being a binary category.
I think you are confused in thinking that humans are somehow not just also running a program that reacts to pain and whatnot.
You feel sympathy for animals, and more sympathy for humans. I don’t think that requires any special explanation or justification, especially when attempting one results in preferences or assertions that are stupid: “I don’t care about animals at all because animals and humans are ontologically distinct.”
Why not just admit that you care about both, just differently, and do whatever seems best from there?
Perhaps, taking your apparent preferences at face value like that, you run into some kind of specific contradiction; or perhaps not. If you do, then you at least have a concrete muddle to resolve.
Why do you assume you’re confused?
Well, I certainly feel very confused. I generally do feel that way when pondering anything related to morality. The whole concept of what is the right thing to do feels like a complete mess, and any attempts to figure it out just seem to add to the mess. Yet I still feel very strongly compelled to understand it. It’s hard to resist the urge to just give up and wait until we have a detailed neurological model of a human brain and can construct from it a mathematical model that would explain exactly what I am asking when I ask what is right, and what the answer is.
I would guess that you’re not a utilitarian and a lot of LWers are. The standard utilitarian position is that all that matters is the interests of beings, and beings’ utility is weighed equally regardless of what those beings are. One “unit” of suffering (or utility) generated by an animal is equal to the same unit generated by a human.
If “a lot” means “a minority”.
Well, no, that can’t be right.
There’s a continuum of... mental complexity, to name something random, between modern dolphins and rocks. Homo sapiens also fits on that curve somewhere.
You might argue that mental complexity is not the right parameter to use, but unless you’re going to argue that rocks are deserving of utility you’ll have to agree to either an arbitrary cut-off point or some mapping between $parameter and utility-deservingness, practically all possible such parameters having a similar continuous curve.
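To make that concrete, here’s a toy sketch in Python (the species, the complexity scores, the logistic curve, and the cutoff value are all invented for illustration; nothing here is a real proposal):

```python
import math

# Hypothetical "mental complexity" scores on an arbitrary 0-100 scale,
# invented purely for illustration.
complexity = {"rock": 0.0, "mosquito": 5.0, "mouse": 40.0,
              "dolphin": 75.0, "human": 90.0}

def weight_continuous(c, midpoint=30.0, steepness=0.15):
    """Smooth logistic mapping: moral weight rises continuously with c."""
    return 1.0 / (1.0 + math.exp(-steepness * (c - midpoint)))

def weight_cutoff(c, threshold=50.0):
    """Binary personhood: full weight at or above the threshold, none below."""
    return 1.0 if c >= threshold else 0.0

for name, c in complexity.items():
    print(f"{name:>9}: continuous={weight_continuous(c):.3f}  "
          f"cutoff={weight_cutoff(c):.0f}")
```

The cutoff version gives the mouse exactly zero weight and the dolphin full weight, and it flips between the two on an arbitrarily small change in the parameter; the continuous version has no such jump, but then you owe an account of where the curve comes from.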
As I understand it, a util is equal regardless of what generates it, but the ability to generate utils out of states of the world varies from species to species. A rock doesn’t experience utility, but dogs and humans do. If a rock could experience utility, it would be equally deserving of it.
Fair enough.
I’m still not sure I agree, but I’ll need to think about it.
I would guess that you’re not a utilitarian and a lot of LWers are.
I’m almost certain this is false for the definition of “utilitarianism” you give in the next sentence.
There is unfortunately a lot of confusion between two different senses of the word “utilitarianism”: the definition you give, and the more general sense of any moral system that uses a utility function.
I thought the latter was just called “consequentialism”.
In practice I’ve seen “utilitarianism” used to refer to both positions, as well as a lot of positions in between.
I generally consider myself to be a utilitarian, but I only apply that utilitarianism to things that have the property of personhood. But I’m beginning to see that things aren’t so simple.
Do corporations that are legally persons count?
I’ve seen “utilitarianism” used to denote both “my utility is the average/[normalized sum] of the utility of each person, plus my exclusive preferences” and “my utility is a weighted sum/average of the utility of a bunch of entities, plus my exclusive preferences”. I’m almost sure that few LWers would claim to be utilitarians in the former sense, especially since most people round here believe minds are made of atoms and thus not very discrete.
I mean, we can add/remove small bits from minds, and unless personhood is continuous (which would imply the second sense of utilitarianism), one tiny change in the mind would have to suddenly shift us from fully caring about a mind to not caring about it at all, which doesn’t seem to be what humans do. This is an instance of the Sorites “paradox”.
(One might argue that utilities are only defined up to affine transformation, but when I say “utility” I mean the thing that’s like utility except it’s comparable between agents. Now that I think about it, you might mean that we’ve defined persons’ utility such that every util is equal in the second sense of the previous sentence, but I don’t think you meant that.)
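(A sketch of the affine point in symbols; this is the standard von Neumann–Morgenstern fact, nothing specific to this thread. A positive affine transformation of any one agent’s utility leaves that agent’s preferences unchanged but changes the interpersonal sum:)

```latex
\[
u_i'(x) = a_i\,u_i(x) + b_i \quad (a_i > 0)
\qquad\Longrightarrow\qquad
\sum_i u_i(x) \;\mapsto\; \sum_i a_i\,u_i(x) + \text{const},
\]
```

That is, each agent’s utility can be rescaled without changing that agent’s preferences, so an “unweighted” interpersonal sum is only pinned down once the scales are fixed; rescaling one agent’s utility silently turns it into a weighted sum.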
Utilitarianism is normative, so it means that your utility should be the average of the utility of all beings capable of experiencing it, regardless of whether your utility currently is that. If it becomes a weighted average, it ceases to be utilitarianism, because it involves considerations other than the maximization of utility.
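In symbols, the distinction being drawn is roughly the following (a sketch, with u_i the utility of being i out of n beings):

```latex
\[
U_{\text{utilitarian}}(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} u_i(x)
\qquad\text{versus}\qquad
U_{\text{weighted}}(x) \;=\; \sum_{i=1}^{n} w_i\,u_i(x), \quad w_i \ge 0,
\]
```

the claim above being that only the first form, where every being’s utility enters with equal weight, counts as utilitarianism proper.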
one tiny change in the mind would have to suddenly shift us from fully caring about a mind to not caring about it at all, which doesn’t seem to be what humans do
Consider how much people care about the living compared to the dead. I think that’s a counterexample to your claim.