Zombies would also not be able to tell they’re not zombies – which leads to the disconcerting question – how do you know you’re not a zombie?
I know that I’m not a zombie since I have consciousness, which in my case at this moment includes consciousness of knowing that I have consciousness. Anything that can be conscious of knowing that it is conscious is not a zombie by definition.
A zombie would not know that it is conscious. It would also not know that it is not conscious. It would not even know that it is uncertain about being conscious or not. Knowing is a property of a mind, and by definition of “zombie” there is no mind there to know with.
Naturally a p-zombie behaves as if it can know things to all levels of external inspection, though whether p-zombies can exist is an unsettled question.
I know that I’m not a zombie since I have consciousness
Yes, but this is exactly what a zombie would say. Sure, in your case you presumably have direct access to your conscious experience that a zombie doesn’t have, but the rhetorical point I’m making in the post is that a zombie would believe it has phenomenal consciousness with the same conviction you have, and when asked to justify its conviction it would point to the same things you do.
You didn’t (rhetorically) ask what a zombie would say. You asked how I know. The fact that something else might say the same thing as I did is not at all disconcerting to me, and I’m not sure why it’s disconcerting to you.
You don’t need to go anything like as far as p-zombies to get something that says the same thing. A program consisting of print(“I know that I’m not a zombie since I have consciousness”) etc does the same thing.
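For concreteness, the trivial program described above can be sketched like this (the function name is illustrative, not from the original comment):

```python
def zombie_utterance() -> str:
    # Return the sentence verbatim; no inner experience is involved.
    # The point: the utterance alone cannot certify consciousness.
    return "I know that I'm not a zombie since I have consciousness"

print(zombie_utterance())
```

Of course, unlike a p-zombie, this program would fail any follow-up questioning, which is the objection raised later in the thread.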
The rhetorical point you’re making is simply false. A p-zombie would not believe anything, since it has no mind with which to believe. It would merely act in the same manner as something that has a mind.
Look, I appreciate the pushback, but I think you’re pressing a point which is somewhat tangential and not load-bearing for my position.
I agree that zombies have no mental states so, by definition, they can’t “believe” anything.
The point is, when you say “I know I’m conscious” you think you’re appealing to your direct phenomenal experience. Fine. But the zombie produces the exact same utterance, not by appealing to its phenomenal experience but through a purely physical/functional process that is a duplicate of the one running in your brain. In this case, the thing doing the causal work to produce the utterances must be the physical/functional profile of the brain, not the phenomenal experience itself.
So if the zombie argument is correct, you think you’re appealing to the phenomenal aspect of consciousness to determine the truth of your consciousness, but you’re actually using the physical/functional profile of your brain. Hence my rhetorical point at the start of the article: if the zombie argument is correct, then how do you know you’re not a zombie? The solution is that the zombie argument isn’t correct.
In the article, I also propose Russellian monism, which takes the phenomenal aspect of consciousness seriously. On this view, you’d know the truth of your consciousness by introspecting because you’d have direct access to it. So again, the point you’re pressing is actually correct—you would indeed know that you’re not a zombie because you have access to your phenomenal consciousness.
A program consisting of print(“I know that I’m not a zombie since I have consciousness”) etc does the same thing.
No it doesn’t. The functional/physical profile of a print statement isn’t similar to that of the human brain. I’m also not sure why this point is relevant.
No problem, if it’s tangential then we can agree that it’s tangential. We also appear to agree that the zombie argument may not be a useful way of thinking about things, as referred to in the last sentence of my first comment.
I agree that if a p-zombie of me can exist, then my consciousness would not be the sole cause for things with my pattern of matter saying that they are conscious. It may still be a cause, in that my consciousness may be causally upstream of having a pattern of matter that emits such utterances, but there may be other ways that such patterns of matter can come to exist as well.
Could you clarify what it means for “the zombie argument” to be correct/incorrect? The version I have in mind (and agree with) is, roughly, ‘p-zombies are conceivable; therefore, we can’t know a priori that facts about the physical world entail, or are identical to, facts about conscious experience’. I would then add that we have insufficient evidence to be empirically certain of that entailment or identity [edit: but it would be very weird if the entailment didn’t hold, and I have no particular reason to believe that it doesn’t.] When you say the zombie argument isn’t correct, are you disagreeing with me on conceivability, or the ‘therefore’, or the empirical part—or do you have a different argument in mind?
On standard physicalism, zombies would be conceivable because physics only captures the functional/relational properties between things, missing the intrinsic properties underlying these relations, which are phenomenal.
On Russellian Monism, zombies are not conceivable because if you duplicate the physics you’re also duplicating the intrinsic, categorical properties, and these are phenomenal (or necessarily give rise to phenomena).
I could also imagine other flavours of Monism (which might be better labelled as property dualism?) for which the intrinsic categorical properties are contingent rather than necessary. On this view, zombies would also be conceivable.
I would tentatively lean towards regular Russellian Monism (i.e. zombies are inconceivable, which is what I crudely meant by saying the zombie argument isn’t correct).
On standard physicalism zombies would be conceivable
On standard (non-eliminative) physicalism, zombies cannot be conceived without contradiction, because physicalism holds that consciousness is entirely physical, and a physical duplicate is a duplicate simpliciter. So the inconceivability of zombies is not a USP of RM.
The bite of the zombie argument is that zombies are conceivable given physics … physics does not predict phenomenal properties, so the absence of phenomenal properties does not lead to a contradiction.
I could also imagine other flavours of Monism (which might be better labelled as property dualism?) for which the intrinsic categorical properties are contingent rather than necessary
Phenomenal “properties” are more necessary under dual-aspect theory (DAT) than under RM, because they are not even separate properties, but arise from the subjective perspective on the underlying reality.
On standard (non eliminative) physicalism, zombies cannot be conceived without contradiction , because physicalism holds that consciousness is entirely physical, and a physical duplicate is a duplicate simpliciter.
This isn’t correct. The standard non-eliminative (type-B) physicalist stance is to grant that zombies are conceivable a priori but deny the move to metaphysical possibility a posteriori. They’d say that physical brain states are identical to phenomena, but we only discover this a posteriori (analogous to water = H2O or heat = molecular motion). You might find this view unsatisfying (as I do), but there are plenty of philosophers who take this line (Loar, Papineau, Tye, etc.), and it’s not contradictory.
The physicalist move of denying zombie conceivability is eliminativist (type-A) and is taken by e.g. Dennett, Dretske, and Lewis.
I agree with you overall (and voted accordingly) but I think this part is a red herring:
You don’t need to go anything like as far as p-zombies to get something that says the same thing. A program consisting of print(“I know that I’m not a zombie since I have consciousness”) etc does the same thing.
It only “says the same thing” in one narrow case; to say all of the same things in the appropriate contexts, the program would need to be tremendously complex.
I mention this because I think you’re clearly correct overall (while of course the words “believe” and “mind” could be defined in ways that do not require consciousness, those are not the relevant senses here), and it would be a pity if the conversation were derailed by that one (IMO) irrelevant example.