While I was writing this comment, CarlShulman posted his, which makes essentially the same point. But since I had already written this longer comment, I’m posting mine too. (Writing quickly is hard!)
In practice we must have a quantitative model of how much “moral value” to assign an animal (or human). I think your position that:
x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads.
is wrong, and the reasons for that fall out of your own arguments.
As you point out, there is a continuum between any two living things (common descent). Nevertheless, we all think that at least some animals have zero, or nearly zero, moral weight: insects, perhaps, but you can go all the way to amoebas. You must either 1) assign gradually diminishing moral value to beings ranging from humans to amoebas; or 2) choose an arbitrary, precise point (or several points) at which the value decreases sharply, with modern species boundaries being an obvious Schelling point but not the only option. Similar arguments have of course been made about the continuum between a sperm and an egg, and an eventual human being.
Option 1 lets you assign non-human animals moral value. But then you must specify the criteria you use to calculate that value, from your list A-G or otherwise. These same criteria will then tell you that some humans have less moral value than others: children, people with advanced dementia or other severe mental deficiencies, etc. Some biological humans may have much less value than, say, a chicken (babies), or none at all (fetuses). Also, at least some post-humans, aliens, and AIs would have far more moral value than any human, even to the point of becoming utility monsters for total utilitarians.
Option 2 is completely arbitrary in terms of what animals you value, so (among its other problems) people won’t be able to agree about it. And if you don’t determine moral value by measuring some underlying property, you won’t be able to determine the value of radical new varieties, such as post-humans or AIs.
You seem to support option 2 (value everyone equally) but you don’t say where you draw the line—and that’s the crucial question.
My own position is option 1, though I’m open to modifying it where failure modes like utility monsters conflict too strongly with my other moral intuitions.
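To make option 1 concrete, here’s a minimal sketch of what such a calculation could look like. The criteria names, weights, and scores below are invented placeholders standing in for your A-G; the only point is that moral weight becomes a graded function of measurable properties rather than a lookup by species.

```python
# Toy sketch of option 1: moral weight as a graded function of criteria.
# The criteria, weights, and scores are invented for illustration only.

CRITERIA_WEIGHTS = {
    "sentience": 0.4,        # capacity for felt experience
    "self_awareness": 0.3,   # some notion of a persistent self
    "future_planning": 0.2,  # preferences about one's own future
    "sociality": 0.1,        # relationships that suffering can disrupt
}

def moral_weight(scores):
    """scores: dict mapping criterion name -> value in [0, 1]."""
    return sum(w * scores.get(c, 0.0) for c, w in CRITERIA_WEIGHTS.items())

print(moral_weight({"sentience": 1.0, "self_awareness": 1.0,
                    "future_planning": 1.0, "sociality": 1.0}))  # adult human: 1.0
print(moral_weight({"sentience": 0.6, "sociality": 0.3}))        # chicken: 0.27
print(moral_weight({}))                                          # amoeba: 0.0
```

The same function, with no special cases, is what then assigns lower weight to babies and people with severe cognitive impairments, and potentially higher-than-human weight to post-humans or AIs.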
The claim is that there is no way to block this conclusion without:
using reasoning that could analogically be used to justify racism or sexism
or
using reasoning that allows for hypothetical circumstances where it would be okay (or even called for) to torture babies in cases where utilitarian calculations prohibit it.
My reasoning can’t justify racism and sexism, because my moral criteria don’t differ noticeably between sexes and races. This is an empirical fact. If it were true that e.g. some race was less sentient than other races, then that would be a valid reason to assign people of that race less moral value. But it’s just not true.
I don’t understand what you mean by (2); could you give an example? If a utilitarian calculation forbids you from doing something, then what could be your reason for doing it anyway? Your utility function can’t be separate from your morals; on the contrary, it must incorporate your morals. (Inconsistent morals are a problem, but without a single VNM-compliant utility function, utilitarianism can’t tell you anything at all.)
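As a toy illustration of that last point (all numbers invented): the weights you assign to different beings are inputs to the utilitarian calculation, not something it could overrule from the outside.

```python
# Toy illustration: the moral weights are part of the utility calculation,
# so the calculation cannot "forbid" anything independently of your morals.
# All numbers are made up.

def total_utility(population):
    """population: list of (moral_weight, welfare) pairs, arbitrary units."""
    return sum(weight * welfare for weight, welfare in population)

status_quo   = [(1.0, 5.0), (0.3, -2.0)]  # e.g. one adult human, one chicken
after_action = [(1.0, 4.0), (0.3, 3.0)]   # costs the human a little, helps the chicken
print(total_utility(after_action) > total_utility(status_quo))   # True: 4.9 > 4.4
```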
Some other notes:
H: What I care about / feel sympathy or loyalty towards
I would like to note that this is the actual basis of almost all human moral reasoning, and all the rest is post-hoc rationalization. When those rationalizations come into conflict with moral intuitions, they are labelled “repugnant conclusions”. I think you dismiss this factor far too lightly.
those not willing to bite the bullet about torturing babies are forced by considerations of consistency to care about animal suffering just as much as they care about human suffering.
I am willing to bite the bullet about babies, quite easily in fact. I assign no more value to newborn human babies than I do to chickens. I only care about babies insofar as other humans care about babies.
I do care about animal suffering—in proportion to some of the measures A-G on your list, so less than human suffering, but (for many animals) more than human baby suffering.
I wouldn’t mind treating babies like we treat some farm animals; that is not because I value those animals as highly as I do humans, but because I value both babies and those animals much less than I do adult humans. (Some farming methods are acceptable to me, and some are not.)
A sentient being is one for whom “it feels like something to be that being”.
Please play rationalist’s taboo here. What empirical test or physical fact tells you whether “it feels like something” to be a certain animal? And moreover, quantitatively so—“how much” it feels like something to be that animal?
I have not given a reason why torturing babies or racism is bad or wrong. I’m hoping that the vast majority of people will share that intuition/value of mine, that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past.
Baby-ism and racism have nothing in common (except that you’re against both). I don’t assign human-level moral status to babies, but I’m not a racist. This is precisely because humans of all races have roughly the same distribution on A-G (and other relevant parameters), whereas newborn babies would test below adults of most (all?) mammalian species.
x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads.
By this I meant literally the same amount (and intensity!) of suffering. So I agree with the point you and Carl Shulman make: if it is the case that some animals can only experience so much suffering, then it makes sense to value them accordingly.
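One minimal way to write that down (the cap and the units are just an illustration): count suffering at its realized intensity, but let each being’s capacity cap what it can realize, so equal realized suffering counts equally across species.

```python
# Illustration only: suffering counts at its realized intensity, capped by
# the being's capacity. Units are arbitrary.

def disvalue(intensity, capacity):
    return min(intensity, capacity)

# Equal realized suffering counts equally, whatever the species...
print(disvalue(3.0, capacity=10.0) == disvalue(3.0, capacity=4.0))  # True
# ...but a low-capacity being simply cannot reach the higher intensities.
print(disvalue(8.0, capacity=4.0))  # 4.0
```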
You must either 1) assign gradually diminishing moral value to beings ranging from humans to amoebas; or 2) choose an arbitrary, precise point (or several points) at which the value decreases sharply, with modern species boundaries being an obvious Schelling point but not the only option.
I’m arguing for 1), but I would only do it by species in order to save time on calculations. If I had infinite computing power, I would do the calculation for each individual separately, according to indicators of what constitutes capacity for suffering and its intensity. Incidentally, I would also assign at least a 20% chance that brain size doesn’t matter; some people do in fact hold this view.
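As a rough sketch of how I’d fold that uncertainty in (only the 20% figure comes from above; the two sub-models, the neuron counts, and the normalization are placeholders made up for illustration):

```python
# Sketch: expected moral weight under uncertainty about whether brain size
# matters. Only the 0.2 probability comes from the comment above; the
# sub-models and numbers are illustrative placeholders.

P_SIZE_IRRELEVANT = 0.2

def weight_if_size_matters(being):
    # e.g. scale by neuron count, normalized to a rough human figure
    return being["neurons"] / 86e9

def weight_if_size_irrelevant(being):
    # e.g. anything that can suffer at all counts fully
    return 1.0 if being["can_suffer"] else 0.0

def expected_moral_weight(being):
    return (P_SIZE_IRRELEVANT * weight_if_size_irrelevant(being)
            + (1 - P_SIZE_IRRELEVANT) * weight_if_size_matters(being))

chicken = {"neurons": 2e8, "can_suffer": True}
print(expected_moral_weight(chicken))  # ~0.20, dominated by the 20% branch
```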
I don’t understand what you mean by (2); could you give an example? If a utilitarian calculation forbids you from doing something, then what could be your reason for doing it anyway?
By “utilitarianism” I meant hedonistic utilitarianism in general, not your personal utility function that (in this scenario) differentiates between sapience and mere sentience. I added this qualifier because “you’d have to be okay with torturing babies” is not a reductio on its own, since utilitarians would have to bite this bullet anyway if they could thereby prevent an even greater amount of suffering in the future.
Please play rationalist’s taboo here. What empirical test or physical fact tells you whether “it feels like something” to be a certain animal? And moreover, quantitatively so—“how much” it feels like something to be that animal?
I only have my first-person evidence to go on. This bothers me a lot, but I’m assuming that some day we will have solved all the problems in the philosophy of mind and can then map out precisely what we mean by “sentience”, having it correspond to specific implemented algorithms or brain states.
Baby-ism and racism have nothing in common (except that you’re against both). I don’t assign human-level moral status to babies, but I’m not a racist. This is precisely because humans of all races have roughly the same distribution on A-G (and other relevant parameters), whereas newborn babies would test below adults of most (all?) mammalian species.
I agree; those are simply the two premises on which the conclusion that we should value all suffering equally is based. You end up with coherent positions by rejecting one or both of them.
I only have my first-person evidence to go on. This bothers me a lot, but I’m assuming that some day we will have solved all the problems in the philosophy of mind and can then map out precisely what we mean by “sentience”, having it correspond to specific implemented algorithms or brain states.
What evidence do you have for thinking that your first-person intuitions about sentience “cut reality at its joints”? Maybe if you analyze what goes through your head when you think “sentience”, and then try to apply that to other animals (never mind AIs or aliens), you’ll just end up measuring how different those animals are from humans with respect to some completely arbitrary and morally unimportant implementation feature.
If after solving all the problems of philosophy you found out something like this, would you accept it, or would you say that “sentience” was no longer the basis of your morals? In other words, why might you prefer this particular intuition to other intuitions that judge how similar something is to a human?
If I understand it correctly, this is the position endorsed here. I don’t think realizing that this view is right would change much for me; I would still try to generalize criteria for why I care about a particular experience and then care about all instances of the same thing. However, I realize that this would make it much more difficult to convince others to draw the same lines. If the question of whether a given being is sentient translates into whether I have reasons to care about that being, then one part of my argument would fall away. This issue doesn’t seem to be endemic to the treatment of non-human animals, though; you’d have it with any kind of utility function that values well-being.