If substrate independence is true, we have no problem saying that Sim#1 Mary was conscious, and that everyone else is conscious in both Sim#1 and Sim#2. But, if we say that Sim#2 Mary is not conscious… then we have to grapple with the fact that she is a P-zombie.[6]
She is not exactly a p-zombie. The Mary in Sim #1 is not a p-zombie version of the original Mary, because she is only a functional duplicate, not a physical duplicate; and the Mary in Sim #2 is only a behavioural duplicate. So the question of “what difference explains the loss of consciousness” is easily answered—all three are different.
And I don’t need to reinvent the wheel here, so I’ll just claim that belief in P-zombies is incoherent, and we don’t really have a good reason to say that she isn’t conscious. So Sim#2 Mary, a mere recording of Mary, must be cons– wait, what?
P-zombies aren’t incoherent; they just contradict physicalism. And you are talking about c-zombies, anyway.
Physicalism has it that an exact atom-by-atom duplicate of a person will be a person and not a zombie, because there is no nonphysical element to go missing. That’s the argument against p-zombies. But if it actually takes an atom-by-atom duplication to achieve human functioning, then the computational theory of mind will be false, because CTM implies that the same algorithm running on different hardware will be sufficient. Physicalism doesn’t imply computationalism, and arguments against p-zombies don’t imply the non-existence of c-zombies—unconscious duplicates that are identical computationally, but not physically.
I think this is folly. I think we’re engaging in a category error if we’re thinking of things this way — we’re not fully grappling with the consequences of substrate independence. Are the people in Sim#1 and Sim#2 conscious twice, like some kind of déjà vu they can’t experience? I really don’t think so.
There’s no strong reason to think they are conscious once.
We say X is conscious if and only if there is such a thing as ⟨what it’s like to be X⟩. If when we run the automaton, we have reason to think that there is such a thing as what it’s like to be the simulated brain, but we also conclude that it shouldn’t matter whether or not you run the automaton…
Something gets lost at each stage. Going from a physical embodiment to a computational simulation loses the physics; going from a computational simulation to a behavioural simulation loses the counterfactual possibilities of the computational simulation; going from a behavioural simulation that actually runs to a notional one loses actual occurrence. Any of those losses could affect consciousness.
I’ve come to a nearly delusional form of belief (it’s not like I’m exactly convinced) that isn’t even fully articulated here: I’ve come to really think that this whole thing is quite bogus, that there really is no difference between realism and solipsism and nihilism and a strange kind of theism.
That should be taken as a reductio ad absurdum of the GAZP.
But you see the importance of the question, “How far can you generalize the Anti-Zombie Argument and have it still be valid?”
Clearly, the answer isn’t “indefinitely”.
@JBlack
The problem with Dust theory is that it assumes that conscious states supervene on brain states instantaneously. There is no evidence for that. We should not be fooled by the “specious present”. We seem to be conscious moment-by-moment, but the “moments” in question are rather coarse-grained, corresponding to the specious present of 0.025–0.25 second or so. It’s quite compatible with the phenomenology that it requires thousands or millions of neural events or processing steps to achieve a subjective “instant” of consciousness. Which would mean you can’t salami-slice someone’s stream-of-consciousness too much without it vanishing; it would also mean that spontaneously occurring Boltzmann states are not conscious; and it preserves the intuition that computation is a process—that a computational state is defined as being a stage of a computation.
She is not exactly a p-zombie. The Mary in Sim #1 is not a p-zombie version of the original Mary, because she is only a functional duplicate, not a physical duplicate; and the Mary in Sim #2 is only a behavioural duplicate.
Thanks for leaving a comment!
My explanation was a bit confusing, sorry about that! I wasn’t intending for there to be an “original” Mary; she and everyone else only ever existed as a simulation. If we were to assume substrate independence, we’d be fine with saying that the denizens of Sim#1 are conscious. And while Sim#2 Mary is not a P-zombie to the alien, she very much is one to the people in Sim#2.
I guess you’re correct that the right terminology would be that she’s a C-zombie, but the people in the simulation can’t know that. And since we can’t know for sure whether we ourselves are “really” physical, for all intents and purposes we can’t be sure that there is a distinction between P- and C-zombies. Regardless, I was assuming substrate independence, not physicalism.
An aside on non-substrate-independent physicalism
Personally, I find physicalist theories of consciousness that don’t include substrate independence quite silly, but that’s a matter of taste, not a refutation.
My vague gesturing at an argument would be something like this: a brain in a vat is halfway between a physical person and a simulation of one, and there don’t seem to be any particular properties about it to say that it shouldn’t be conscious — and, crucially, such properties don’t seem to appear the more we shift the slider towards “computer-brain,” first by replacing each neuron with a chip, and then by replacing networks of chips with bigger chips running a network, and so on until the whole thing is a chip. Is it really the case that we’re losing the physics?
Regardless, this is a separate (though related) conversation. My piece was about apparent implications of substrate independence, not physicalism. In fact, I happen to think that part of the reason why a lot of physicalists have the tendency to speak of “actual persons made of actual atoms” is because they (subconsciously?) recognize that unintuitive conclusions like this can easily crop up, and they find them, well — absurd, as you put it.
Clearly, the answer isn’t “indefinitely”.
Is it that clear? A reductio ad absurdum is really just a statement about the extent of one’s philosophical Overton Window — it’s not a proof by contradiction.
An aside on the absurd
We used to think that the prediction of dark stars meant that Newton’s law of gravity broke down when it came to light (which was true); but then again, we thought the same about the prediction of black holes and General Relativity. Nowadays, most physicists (probably) don’t believe that white holes exist, despite the fact that they’re just as predicted by GR as black holes, because they find the prospect absurd (in the absence of evidence).
Of course there are major differences here. Everything I’m saying about subjective experience is on its face unfalsifiable — but, at least currently, so is my stating that I’m conscious (to you, at least).
I’m still hoping for a smart person to come up with some mathematical reason for why what I’m saying makes sense; the unsung hero that Claude Shannon anticipated:
I think [...] that one is now, perhaps for the first time, ready for a real theory of meaning.
~ Shannon (1949) The Mathematical Theory of Communication, p. 116.
My explanation was a bit confusing, sorry about that! I wasn’t intending for there to be an “original” Mary; she and everyone else only ever existed as a simulation. If we were to assume substrate independence, we’d be fine with saying that the denizens of Sim#1 are conscious.
I think the assumption the argument works from is that Consciousness Is Computation. The substrate independence of computation, which I don’t doubt, doesn’t prove anything about consciousness without that.
And while Sim#2 Mary is not a P-zombie to the alien, she very much is one to the people in Sim#2.
I guess you’re correct that the right terminology would be that she’s a C-zombie, but the people in the simulation can’t know that.
And since we can’t know for sure whether we ourselves are “really” physical, for all intents and purposes we can’t be sure that there is a distinction between P- and C-zombies.
It’s about explanation. Dualism has more resources to explain consciousness than physicalism, which has more resources than computationalism, and so on. That doesn’t mean you should jump straight to the richest ontology, because that would be against Occam’s Razor. What should you do? No one knows! But there is no established fact that you can explain consciousness with algorithms alone.
Personally, I find physicalist theories of consciousness that don’t include substrate independence quite silly, but that’s a matter of taste, not a refutation.
Computationalism is a particular form of multiple realisability. Physicalism doesn’t exclude it, or necessitate it. Other forms of multiple realisability are available.
My vague gesturing at an argument would be something like this: a brain in a vat is halfway between a physical person and a simulation of one
Err..why? A physical brain that happens to be in a vat is a physical brain, surely?
first by replacing each neuron with a chip, and then by replacing networks of chips with bigger chips running a network, and so on until the whole thing is a chip. Is it really the case that we’re losing the physics?
You are losing the specific physics. Computational substrate independence is a special case of substrate independence, but substrate independence in no case implies immateriality.
ETA
We used to think that the prediction of dark stars meant that Newton’s law of gravity broke down when it came to light (which was true); but then again, we thought the same about the prediction of black holes and General Relativity. Nowadays, most physicists (probably) don’t believe that white holes exist, despite the fact that they’re just as predicted by GR as black holes, because they find the prospect absurd (in the absence of evidence).
You can be forced into a belief in counterintuitive conclusions by strong evidence or arguments… and you should only believe them on the basis of strong evidence and arguments. The rule is not “never believe in counterintuitive conclusions”.