I’ve read the first part of the post (“What is Speciesism?”), and have a question.
Does your argument have any answer to applying modus tollens to the argument from marginal cases?
In other words, if I say: “Actually, I think it’s ok to kill/torture human newborns/infants; I don’t consider them to be morally relevant[1]” (likewise severely mentally disabled adults, likewise (some? most?) other marginal cases) — do you still expect your argument to sway me in any way? Or no?
[1] Note that I can still be in favor of laws that prohibit infanticide, for game-theoretic reasons. (For instance, because birth makes a good bright line, as does the species divide. The details of effective policy and optimal criminal justice laws are an interesting conversation to have but not of any great relevance to the moral debate.)
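To make the move explicit, here is a minimal sketch of the two directions in Lean; the propositional labels are mine (M: "marginal-case humans are morally relevant", A: "nonhuman animals are morally relevant"), and the shared premise is the parity claim of the argument from marginal cases:

```lean
-- parity : M → A  ("no morally relevant difference" premise)

-- The argument from marginal cases runs modus ponens:
-- from parity and M, conclude A.
example (M A : Prop) (parity : M → A) (hM : M) : A :=
  parity hM

-- The reply I'm describing runs modus tollens:
-- from parity and ¬A, conclude ¬M.
example (M A : Prop) (parity : M → A) (hnA : ¬A) : ¬M :=
  fun hM => hnA (parity hM)
```

One man's modus ponens is another's modus tollens: both parties accept the parity premise and disagree only about which end to hold fixed.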
Edit: Having now read the rest of your post, I see that you… sort of address this point. But to be honest, I don’t think you take the opposing position very seriously; I get the sense that you’ve constructed arguments that you think someone on the opposite side would make, if they held exactly your views in everything except, inexplicably, this one area, and these arguments you then knock down. In short, while I am very much in favor of having this discussion and think that this post is a good idea… I don’t think your argument passes the ideological Turing test. I would have preferred for you to, at least, directly address the challenges in this post.
I don’t think your argument passes the ideological Turing test. I would have preferred for you to, at least, directly address the challenges in this post.
The post you link to makes five points.
1) and 2) don’t concern the arguments I’m making because I left out empirical issues on purpose.
3) is also an empirical issue, and one that applies to some humans as well.
4) is the most interesting one.
Something About Sapience Is What Makes Suffering Bad
I sort of addressed this here. I must say I’m not very familiar with this position so I might be bad at steelmanning it, but so far I simply don’t see why intelligence has anything to do with the badness of suffering.
As for 5), this is certainly a valid thing to point out when people are estimating whether a given being is sentient or not. Regarding the normative part of this argument: If there were cute robots that I had empathy for but was sure weren’t sentient, I genuinely wouldn’t argue for giving them moral consideration.
No, this is indeed a common feature of coherentist reasoning: you can make it go both ways. I cannot logically show that you are making a mistake here. I may, however, appeal to shared intuitions or bring further arguments that could encourage you to reflect on your views.
And note that I was silent on the topic of killing, the point I made later in the article was only focused on caring about suffering. And there I think I can make a strong case that suffering is bad independently of where it happens.
It’s in the article. If you’re not impressed by it then I’m indeed out of arguments.
Furthermore, while D and E seem plausible candidates for reasons against killing a being with these properties (E is in fact Peter Singer’s view on the matter), none of the criteria from A to E seem relevant to suffering, to whether a being can be harmed or benefitted. The case for these being bottom-up morally relevant criteria for the relevance of suffering (or happiness) is very weak, to say the least.
Maybe that’s the speciesist’s central confusion, that the rationality/sapience of a being is somehow relevant for whether its suffering matters morally. Clearly, for us ourselves, this does not seem to be the case. If I were told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not as though I would be less afraid of the torture or care less about averting it!
There’s also a hyperlink in the first paragraph referring to section 6 of the linked paper.
Ok. Yeah, I don’t find any of those to be strong arguments. Again, I would like to urge you to consider and address the points brought up in this post.
In other words, if I say: “Actually, I think it’s ok to kill/torture human newborns/infants; I don’t consider them to be morally relevant[1]” (likewise severely mentally disabled adults, likewise (some? most?) other marginal cases) — do you still expect your argument to sway me in any way? Or no?
No, that would be when we fetch the pitchforks.
[1] Note that I can still be in favor of laws that prohibit infanticide, for game-theoretic reasons. (For instance, because birth makes a good bright line, as does the species divide. The details of effective policy and optimal criminal justice laws are an interesting conversation to have but not of any great relevance to the moral debate.)
The only time I heard such an argument, it wasn’t their true rejection, and they invented several other such false rejections during the course of our discussion. So that would be my response on hearing someone actually make the unaddressed argument you outlined.
Do such game-theoretic reasons actually hold together, by the way? It seems unlikely, unless you suddenly start caring about children somewhere during their first few years.
The only time I heard such an argument, it wasn’t their true rejection, and they invented several other such false rejections during the course of our discussion. So that would be my response on hearing someone actually make the unaddressed argument you outlined.
Do such game-theoretic reasons actually hold together, by the way? It seems unlikely, unless you suddenly start caring about children somewhere during their first few years.
No, this is definitely my true rejection. To expand a bit, take the infanticide case as an example: I think infanticide should be illegal, but I don’t think it should be considered murder or anything close to it, nor punished nearly as severely.
Basically, there’s no “real” line between sapience and non-sapience, and humans, in the course of their development, start out as cognitively inert matter and end up as sapient beings. But since we don’t think evaluating in every single case is feasible, or reliable in the “border region” cases, or likely to lead to consistently (morally) good outcomes in practice (due to assorted cognitive and institutional limitations), we want to draw the line way back in the development process, where we’re sure there’s no sapience and killing the developing human is morally ok. Where specifically? Well, since this is a pragmatic and not a moral consideration, there is no unique morally ordained line placement, but there is a natural “bright line”: birth. Birth is more or less in the desired region of time, so that’s where we draw it.
Now, since we drew the line for pragmatic reasons, we are perfectly aware that the person who commits infanticide has not really done anything morally wrong. But on the other hand, we want to discourage people from redrawing the line on an individual basis, from “taking line placement into their own hands”, so to speak, because then we’re back to the “evaluating in every case is not a good idea” issue. But on the third hand, such discouragement should not take the form of putting the poor person in jail for murder! The problem is not that important; the well-being and happiness of an adult human for a large chunk of their life is worth more than the (nonzero, but small) chance that line degradation will lead to bad outcomes! Make it a lesser offense, and you’ve more or less got the best of both worlds. (Equivalent to assault, perhaps? I don’t know, this is a practical question, and best settled with the help of experts in criminal justice and public policy.)
Huh, a mainstream term for what LWers call a Schelling fence!
And note that I was silent on the topic of killing, the point I made later in the article was only focused on caring about suffering. And there I think I can make a strong case that suffering is bad independently of where it happens.
I would very much like to see that case made!
I think the relevant response would be torturing human infants, and other marginal cases.
Yep, fair enough. I’ve changed my post to include this.