The kind of Vulcan you are imagining might have some kind of moral status, but is phenomenal consciousness really the crux here? Suppose there’s an otherwise identical Vulcan except no subjective experience—would there be no moral status in that case? Now, the obvious problem is of course that the coherence of this counterfactual is highly dubious to many. But my personal intuition is that to the extent I accept the counterfactual, making the Vulcan a zombie makes no moral difference. Whereas for beings with affective sentience, whether somebody goes around waving their magic wand and turning them into zombies or not intuitively seems morally significant.
There’s a lot of logical uncertainty here about the space of possible minds wrt phenomenal consciousness and affective experience, so there might be some kind of necessary connection between phenomenal consciousness (even when it’s totally non-affective) and features I associate with moral status for entities that lack affective sentience. But after I stopped assuming it’s always accompanied by valence, phenomenal consciousness in and of itself just prima facie doesn’t seem morally very important at all. Yes, destroying a planet of Vulcans to save a shrimp seems monstrous, but so does destroying a planet of entities that have sophisticated behavior and cognition without subjective experience.
Interesting thought. I explicitly left zombies out of the post, since the possibility of zombies is contentious and my intuition about their moral status is much less clear.
You might enjoy reading through one of the papers I linked in the post, by Joshua Shepherd, where he lands on a view on which consciousness is not necessary for moral status. The thought is crystallised by thinking about robots (which, unlike zombies, are not exact duplicates of humans), and he notes that the intuition is still unclear:
From the paper:
> Imagine that you are an Earth scientist, eager to learn more about the makeup of these robots. So you capture a small one—very much against its protests—and you are about to cut it open to examine its insides, when another robot, its mother, comes racing up to you, desperately pleading with you to leave it alone. She begs you not to kill it, mixing angry assertions that you have no right to treat her child as though it were a mere thing, with emotional pleas to let it go before you harm it any further. Would it be wrong to dissect the child? (2019, 28).
> Kagan offers a non-necessitarian judgment: ‘I find that I have no doubt whatsoever that it would be wrong to kill (or, if you prefer, to destroy) the child in a case like this. It simply doesn’t matter to me that the child and its mother are “mere” robots, lacking in sentience… For you to destroy such a machine really would be morally horrendous’ (28).
> In response, Kriegel (forthcoming) offers the opposite judgment:
> No matter how many experiential terms the vignette is surreptitiously peppered with (“desperately,” “angry,” “emotional”), and how many automatized projections it counts on from what such behavior in conscious beings indicates about their likely experiential state, one would have to be seriously confused to think that one is in any way harming a collection of metal plates by intervening in the metal’s internal organization (forthcoming).
> When cases generate sharply conflicting judgments across a set of very sharp philosophers, it can be difficult to know how to proceed.
I think the anti-robot-killing stance makes sense for two reasons, but one of them is technically fighting the hypothetical and one is answering a slightly different question:
It’s literally impossible for you to have justified certainty that they lack sentience. (And not just in the ‘1 is not a probability’ sense, i.e. you always need to leave room for new evidence to change your mind; but because the thing in question is inherently subjective and can only be reliably detected ‘from the inside’.)
Even if they do lack sentience, there are two morally relevant questions here:
1. Is the action bad?
2. Does the action reflect badly on the actor?
If it really doesn’t cause any harm, and the person doing it somehow knows that for sure, then I don’t think the act of ‘killing’ a non-sentient robot is bad. But most good people would have very strong instinctive qualms about doing something that looks, feels, and sounds so similar to murdering a conscious being and deeply psychologically harming another. And if I learn that someone has done it, I’m going to update towards their being a low-empathy or actively sadistic person.
I agree that both of these responses try to get around the hypothetical a little, but I think they’re both really sensible practical suggestions and I strongly agree with where you landed.
Repeat the robots question, except ask the question about video game characters instead. Now, there are some game characters that have very simple patterns of behavior, but there are some that are a lot more complex, even if still describable by a set of algorithms. I’m sure there are characters that can and will beg you not to kill them. Is it wrong to play the video game and sacrifice the video game characters?
I’d reject the analogy between Vulcans and video game characters.
Vulcans can freely interact with their environments and have goals/desires which can be promoted/thwarted. It’s not clear that any of these hold for video game characters.
If you made the character sophisticated enough that the algorithm in the game realised a conscious mind, capable of interacting with its environment and having goals, then I think I’d bite the bullet and say it’s wrong to kill the character gratuitously.
You’re thinking of a p-Vulcan as a slight variation on a human. But the context includes:
> Remember, it’s highly likely that shrimps have some form of phenomenal consciousness and experience some form of suffering. Shrimp suffering is bad. Even though we lack the capability to accurately predict its intensity, the shrimp certainly suffers more than the Vulcan would. Vulcans totally lack the capacity to suffer.
A p-Vulcan doesn’t have to be very humanlike. It only needs to be shrimplike, but without the ability to suffer. I do think that many existing video game characters would qualify, by that standard, as similar to a p-Vulcan. Is it wrong to kill those video game characters?