I think a critical ingredient missing from this evaluation is an account of why simulating the brain would cause consciousness. Realizing why it must makes functionalism a far more sensible conclusion. Otherwise it's just "I guess it probably would work":
Suzie Describes Her Experience
Suzie is a human known to have phenomenal experiences.
Suzie makes a statement about what it's like to have one of those experiences: "It's hard to describe what it feels like to think. It feels kinda like... the thoughts appear, almost fully formed..."
Suzie’s actual experiences must have a causal effect on her behavior because: when we discuss our experience, it always feels like we’re talking about our experience. If the actual experience wasn’t having any effect on what we said, then it would have to be perpetual coincidence that our words lined up with our experience. Perpetual coincidence is impossible.
Replacing Neurons Generally
We know that whatever low-level details cause a neuron to fire, the outcome ultimately resolves into a binary conclusion: fire or do not fire.
Every outward behavior we perform rests on this same causal chain: sensory inputs cause neurons to fire, some of those cause others to fire, and eventually some of those are motor neurons that drive vocal cords to speak or fingers to type.
If you replace a neuron with a functional equivalent, whether hardware or software, that fires at the same speed and strength as the original, and that given the same input fires or does not fire exactly as the original would have, then the behavior is exactly the same as the original. That this is true is not a guess; it follows from the physics of the causal chain.
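The functional-equivalence point can be sketched with a toy model. This is purely illustrative, not a biological simulation: the function names, weights, and threshold are all made up. The point is only that two components with different internals, given the same inputs, produce the same binary fire/no-fire decision, so any circuit built from either produces identical downstream behavior.

```python
# Toy illustration of substrate independence (all names and numbers hypothetical):
# a "functional equivalent" only needs to reproduce the fire / don't-fire
# decision for each input, not the original component's physical substrate.

def original_neuron(inputs, weights, threshold):
    """Stand-in for the original neuron: fires iff weighted input meets threshold."""
    return sum(w * x for w, x in zip(weights, inputs)) >= threshold

def functional_replacement(inputs, weights, threshold):
    """A replacement with different internals but the same input-output mapping."""
    total = 0.0
    for w, x in zip(weights, inputs):
        total += w * x
    return total >= threshold

# Both map every input to the same binary outcome, so any network built
# from either one propagates identical firing patterns downstream.
weights, threshold = [0.5, -0.2, 1.0], 0.6
for inputs in [(1, 0, 0), (0, 1, 1), (1, 1, 1)]:
    assert original_neuron(inputs, weights, threshold) == \
           functional_replacement(inputs, weights, threshold)
```

The internals differ (a generator expression versus an explicit loop, standing in for biology versus silicon), but since the fire/no-fire mapping is identical, nothing downstream can distinguish them.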
Replacing Suzie's Neurons Specifically While She Describes Her Experience
We have already established that Suzie’s experience must have a causal effect on honest self-report of experience.
And we also established that all causal effects on behavior resolve at the action potential scale and abstraction level. For instance, if quantum weirdness happens on a smaller scale, it only affects behavior if it somehow determines whether or not a neuron fires. Our hardware or software equivalents would be made to account for that.
Not to mention there's no good reason to suppose that tiny quantum effects are orchestrating large-scale alterations of neuronal firing patterns. I'm not sure the quantum consciousness people are even arguing this. I think their focus is more on attempting to find consciousness in the quantum realm than on claiming that quantum effects can drastically alter firing patterns.
So if Suzie's experience has a causal effect, and the entire causal chain behind her behavior consists of action potential propagation, then her experience must somehow be contained in the patterns of that propagation. And since those patterns are independent of the substrate, experience must be something about what these components are doing, rather than about the components themselves.