I take your point around substrate independence being a conclusion of computationalism rather than independent evidence for it—this is a fair criticism.
If I’m interpreting your argument correctly, there are two possibilities:
1. Biological structures happen to implement some function which produces consciousness. [Functionalism]
2. Biological structures have some physical property X which produces consciousness. [Biological Essentialism or non-Computationalist Physicalism]
Your argument seems to be that 2) has more explanatory power because it has access to all of the potential physical processes underlying biology to try to explain consciousness whereas 1) is restricted to the functions that the biological systems implement. Have I captured the argument correctly? (please let me know if I haven’t)
Imagine that we could successfully implement a functional isomorph of the human brain in silicon. A proponent of 2) would need to explain why this functional isomorph of the human brain which has all the same functional properties as an actual brain does not, in fact, have consciousness. A proponent of 1) could simply assert that it does. It’s not clear to me what property X the biological brain has that induces consciousness and which couldn’t be captured by a functional isomorph in silicon. I know there’s been some recent work by Anil Seth where he tries to pin down the properties X which biological systems may require for consciousness https://osf.io/preprints/psyarxiv/tz6an. His argument suggests that extremely complex biological systems may implement functions which are non-Turing computable. However, while he identifies biological properties that would be difficult to implement in silicon, I didn’t find this sufficient evidence for the claim that brains perform non-Turing computable functions. Do you have any ideas?
I’ll admit that modern-day LLMs are nowhere near functional isomorphs of the human brain, so it could be that there’s some “functional gap” between their implementation and the human brain. So it could indeed be that LLMs are not conscious because they are missing some “important function” required for consciousness. My contention in this post is that if they’re able to reason about their internal experience and qualia in a sophisticated manner then this is at least circumstantial evidence that they’re not missing the “important function.”
Imagine that we could successfully implement a functional isomorph of the human brain in silicon. A proponent of 2) would need to explain why this functional isomorph of the human brain which has all the same functional properties as an actual brain does not, in fact, have consciousness.
Physicalism can do that easily, because it implies that there can be something special about running unsimulated, on bare metal.
Computationalism, even very fine-grained computationalism, isn’t a direct consequence of physicalism.
Physicalism has it that an exact atom-by-atom duplicate of a person will be a person and not a zombie, because there is no nonphysical element to go missing. That’s the argument against p-zombies. But if it actually takes an atom-by-atom duplication to achieve human functioning, then the computational theory of mind will be false, because CTM implies that the same algorithm running on different hardware will be sufficient. Physicalism doesn’t imply computationalism, and arguments against p-zombies don’t imply the non-existence of c-zombies—unconscious duplicates that are identical computationally, but not physically.
So it is possible, given physicalism, for qualia to depend on the real physics, the physical level of granularity, not on the higher level of granularity that is computation.
Anil Seth where he tries to pin down the properties X which biological systems may require for consciousness https://osf.io/preprints/psyarxiv/tz6an. His argument suggests that extremely complex biological systems may implement functions which are non-Turing computable
It presupposes computationalism to assume that the only possible defeater for a computational theory is the wrong kind of computation.
My contention in this post is that if they’re able to reason about their internal experience and qualia in a sophisticated manner then this is at least circumstantial evidence that they’re not missing the “important function.”
There’s no evidence that they are not stochastic-parroting, since their training data wasn’t pruned of statements about consciousness.
If the claim of consciousness is based on LLMs introspecting their own qualia and reporting on them, there’s no clinching evidence they are doing so at all. You’ve got the fact that computational functionalism isn’t necessarily true, the fact that TT-type investigations don’t pin down function, and the fact that there is another potential explanation for the results.
Ok, I think I can now see a little more clearly where we’re diverging. The non-computational physicalist position seems to postulate that consciousness requires a physical property X, and the presence or absence of this physical property is what determines consciousness—i.e. it’s what the system is that is important for consciousness, not what the system does.
That’s the argument against p-zombies. But if it actually takes an atom-by-atom duplication to achieve human functioning, then the computational theory of mind will be false, because CTM implies that the same algorithm running on different hardware will be sufficient.
I don’t find this position compelling for several reasons:
First, if consciousness really required extremely precise physical conditions (so precise that we’d need atom-by-atom level duplication to preserve it), we’d expect it to be very fragile. Yet consciousness is actually remarkably robust: it persists through significant brain damage, chemical alterations (drugs and hallucinogens) and even as neurons die and are replaced. We also see consciousness in different species with very different neural architectures. Given this robustness, it seems more natural to assume that consciousness is about maintaining what the system is doing (implementing feedback loops, self-models, integrating information, etc.) rather than its exact physical state.
Second, consider what happens during sleep or under anaesthesia. The physical properties of our brains remain largely unchanged, yet consciousness is dramatically altered or absent. Immediately after death (before decay sets in), most physical properties of the brain are still present, yet consciousness is gone. This suggests consciousness tracks what the brain is doing (its functions) rather than what it physically is. The physical structure has not changed but the functional patterns have changed or ceased.
I acknowledge that functionalism struggles with the hard problem of consciousness—it’s difficult to explain how subjective experience could emerge from abstract computational processes. However, non-computationalist physicalism faces exactly the same challenge. Simply identifying a physical property common to all conscious systems doesn’t explain why that property gives rise to subjective experience.
“It presupposes computationalism to assume that the only possible defeater for a computational theory is the wrong kind of computation.”
This is fair. It’s possible that some physical properties would prevent the implementation of a functional isomorph. As Anil Seth identifies, there are complex biological processes, like nitric oxide diffusing across cell membranes, which would be specifically difficult to implement in artificial systems and might be important for performing the functions required for consciousness (on functionalism).
“There’s no evidence that they are not stochastic-parroting, since their training data wasn’t pruned of statements about consciousness. If the claim of consciousness is based on LLMs introspecting their own qualia and reporting on them, there’s no clinching evidence they are doing so at all.”
I agree. The ACT (AI Consciousness Test) specifically requires AI to be “boxed-in” in pre-training to offer a conclusive test of consciousness. Modern LLMs are not boxed-in in this way (I mentioned this in the post). The post is merely meant to argue something like “If you accept functionalism, you should take the possibility of conscious LLMs more seriously”.
I don’t find this position compelling for several reasons:
First, if consciousness really required extremely precise physical conditions (so precise that we’d need atom-by-atom level duplication to preserve it), we’d expect it to be very fragile.
Don’t assume that, then. Minimally, non-computational physicalism only requires that the physical substrate makes some sort of difference. Maybe approximate physical resemblance results in approximate qualia.
Yet consciousness is actually remarkably robust: it persists through significant brain damage, chemical alterations (drugs and hallucinogens) and even as neurons die and are replaced.
You seem to be assuming a maximally coarse-grained either-conscious-or-not model.
If you allow for fine-grained differences in functioning and behaviour, all those things produce fine-grained differences. There would be no point in administering anaesthesia if it made no difference to consciousness. Likewise, there would be no point in repairing brain injuries. Are you thinking of consciousness as a synonym for personhood?
We also see consciousness in different species with very different neural architectures.
We don’t see that they have the same kind or level of consciousness.
Given this robustness, it seems more natural to assume that consciousness is about maintaining what the system is doing (implementing feedback loops, self-models, integrating information, etc.) rather than its exact physical state.
Stability is nothing like a sufficient explanation of consciousness, particularly the hard problem of conscious experience... even if it is necessary. But it isn’t necessary either, as the cycle of sleep and waking tells all of us every day.
Second, consider what happens during sleep or under anaesthesia. The physical properties of our brains remain largely unchanged, yet consciousness is dramatically altered or absent.
Obviously the electrical and chemical activity changes. You are narrowing “physical” to “connectome”. Physicalism is compatible with the idea that specific kinds of physical activity are crucial.
Immediately after death (before decay sets in), most physical properties of the brain are still present, yet consciousness is gone. This suggests consciousness tracks what the brain is doing (its functions)
No, physical behaviour isn’t function. Function is abstract, physical behaviour is concrete. Flight simulators functionally duplicate flight without flying. If function were not abstract, functionalism would not lead to substrate independence. You can build a model of ion channels and synaptic clefts, but the modelled sodium ions aren’t actual sodium ions, and if the universe cares about activity being implemented by actual sodium ions, your model isn’t going to be conscious.
Rather than what it physically is. The physical structure has not changed but the functional patterns have changed or ceased.
Physical activity is physical.
I acknowledge that functionalism struggles with the hard problem of consciousness—it’s difficult to explain how subjective experience could emerge from abstract computational processes. However, non-computationalist physicalism faces exactly the same challenge. Simply identifying a physical property common to all conscious systems doesn’t explain why that property gives rise to subjective experience.
I never said it did. I said it had more resources. It’s badly off, but not as badly off.
Yet, we generally accept behavioural evidence (including sophisticated reasoning about consciousness) as evidence of consciousness in humans.
If we can see that someone is a human, we know that they have a high degree of biological similarity. So we’ll have behavioural similarity and biological similarity, and it’s not obvious how much lifting each is doing.
Functionalism doesn’t require giving up on qualia, but only acknowledging physics. If neuron firing behavior is preserved, the exact same outcome is preserved,
Well, the externally visible outcome is.
If I say “It’s difficult to describe what it feels like to taste wine, or even what it feels like to read the label, but it’s definitely like something”, there are two options: either it’s a perpetual coincidence that my experience of attempting to translate the feeling of qualia into words always aligns with the words that actually come out of my mouth, or it is not. Since perpetual coincidence is statistically impossible, we know that experience has some type of causal effect.
In humans.
So far that tells us that epiphenomenalism is wrong, not that functionalism is right.
The binary conclusion of whether a neuron fires or not encapsulates any lower level details, from the quantum scale to the micro-biological scale
What does “encapsulates” mean? Are you saying that fine-grained information gets lost? Note that the basic fact of running on the metal is not lost.
—this means that the causal effect experience has is somehow contained in the actual firing patterns.
Yes. That doesn’t mean the experience is, because a computational Zombie will produce the same outputs even if it lacks consciousness, uncoincidentally.
A computational duplicate of a believer in consciousness and qualia will continue to state that it has them, whether it does or not, because it’s a computational duplicate, so it produces the same output in response to the same input.
We have already eliminated the possibility of happenstance or some parallel non-causal experience,
You haven’t eliminated the possibility of a functional duplicate still being a functional duplicate if it lacks conscious experience. Basically, epiphenomenalism, coincidence, and functionalism aren’t the only options.
Well, the externally visible outcome is [preserved]
Yes, I’m specifically focused on the behaviour of an honest self-report
What does “encapsulates” mean? Are you saying that fine-grained information gets lost? Note that the basic fact of running on the metal is not lost.
Fine-grained information becomes irrelevant implementation details. If the neuron still fires, or doesn’t, smaller noise doesn’t matter. The only reason I point this out is specifically as it applies to the behaviour of a self-report (which we will circle back to in a moment). If it doesn’t materially affect the output so powerfully that it alters that final outcome, then it is not responsible for outward behaviour.
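To make the “encapsulation” point concrete, here is a toy sketch in Python (my own illustration, assuming a crude threshold unit rather than any realistic neuron model): lower-level detail only matters to downstream behaviour if it is large enough to flip the binary firing decision.

```python
THRESHOLD = 1.0

def fires(synaptic_inputs, lower_level_detail=0.0):
    """Binary firing decision: anything below this level only matters
    if it pushes the summed input across the threshold."""
    return sum(synaptic_inputs) + lower_level_detail >= THRESHOLD

print(fires([0.4, 0.5, 0.3]))         # True
print(fires([0.4, 0.5, 0.3], 0.01))   # True: the fine detail is "encapsulated"
print(fires([0.1, 0.2], 0.01))        # False: still below threshold either way
```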
Yes. That doesn’t mean the experience is, because a computational Zombie will produce the same outputs even if it lacks consciousness, uncoincidentally.
A computational duplicate of a believer in consciousness and qualia will continue to state that it has them, whether it does or not, because it’s a computational duplicate, so it produces the same output in response to the same input.
You haven’t eliminated the possibility of a functional duplicate still being a functional duplicate if it lacks conscious experience.
I’m saying that we have ruled out that a functional duplicate could lack conscious experience because:
we have established conscious experience as part of the causal chain: to be able to feel something and then output a description, through voice or typing, that is based on that feeling. If conscious experience was part of that causal chain, and the causal chain consists purely of neuron firings, then conscious experience is contained in that functionality.
We can’t invoke the idea that smaller details (than neuron firings) are where consciousness manifests, because unless those smaller details affect neuronal firing patterns enough to cause the subject to speak about what it feels like to be sentient, they are not part of that causal chain, which sentience must be a part of.
I think we might actually be agreeing (or ~90% overlapping) and just using different terminology.
Physical activity is physical.
Right. We’re talking about “physical processes” rather than static physical properties, i.e. which processes are important for consciousness to be implemented, and can the physics support these processes?
No, physical behaviour isn’t function. Function is abstract, physical behaviour is concrete. Flight simulators functionally duplicate flight without flying. If function were not abstract, functionalism would not lead to substrate independence. You can build a model of ion channels and synaptic clefts, but the modelled sodium ions aren’t actual sodium ions, and if the universe cares about activity being implemented by actual sodium ions, your model isn’t going to be conscious.
The flight simulator doesn’t implement actual aerodynamics (it’s not implementing the required functions to generate lift), but this isn’t what we’re arguing. A better analogy might be to compare a bird’s wing to a steel aeroplane wing: both implement the actual physical process required for flight (generating lift through certain airflow patterns), just with different materials.
Similarly, a wooden rod can burn in a fire whereas a steel rod can’t. This is because the physics of the material prevents a certain function (oxidation) from being implemented.
So when we’re imagining a functional isomorph of the brain built in silicon, this presupposes that silicon can actually replicate all of the required functions with its specific physics. As you’ve pointed out, this is a big if! There are physical processes (such as nitric oxide diffusion across cell membranes) which might be impossible to implement in silicon yet fundamentally important for consciousness.
I don’t disagree! The point is that the functions which this physical process is implementing are what’s required for consciousness, not the actual physical properties themselves.
I think I’m more optimistic than you that a moderately accurate functional isomorph of the brain could be built which preserves consciousness (largely due to the reasons I mentioned in my previous comment around robustness). But putting this aside for a second, would you agree that if all the relevant functions could be implemented in silicon then a functional isomorph would be conscious? Or is the contention that this is like trying to make steel burn, i.e. we’re just never going to be able to replicate the functions in another substrate because the physics precludes it?
We are talking about functionalism—it’s in the title. I am contrasting physical processes with abstract functions.
In ordinary parlance, the function of a physical thing is itself a physical effect... toasters toast, kettles boil, planes fly.
In the philosophy of mind, a function is an abstraction, more like the mathematical sense of a function. In maths, a function takes some inputs and/or produces some outputs. Well-known examples are familiar arithmetic operations like addition, multiplication, squaring, and so on. But the inputs and outputs are not concrete physical realities. In computation, the inputs and outputs of a functional unit, such as a NAND gate, always have some concrete value, some specific voltage, but not always the same one. Indeed, general Turing-complete computers don’t even have to be electrical—they can be implemented in clockwork, hydraulics, photonics, etc.
This is the basis for the idea that a computer programme can be the same as a mind, despite being made of different matter—it implements the same abstract functions. The abstraction of the abstract, philosophy-of-mind concept of a function is part of its usefulness.
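For instance (a minimal sketch of my own, in Python, not anything from the philosophy literature): the abstract function “addition on natural numbers” can be realised with machine integers or with unary tally strings. At the abstract level they are the same function; at the concrete level the realisations have nothing in common.

```python
def add_ints(a: int, b: int) -> int:
    # Realisation 1: machine integers.
    return a + b

def add_unary(a: str, b: str) -> str:
    # Realisation 2: a number n is represented as n tally marks, e.g. 3 -> "|||".
    return a + b

# Both realise the same abstract function under their respective encodings.
for a, b in [(2, 3), (0, 4), (7, 1)]:
    assert add_ints(a, b) == len(add_unary("|" * a, "|" * b))
```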
Searle is a famous critic of computationalism, and his substitute for it is a biological essentialism in which the generation of consciousness is a brain function—in the concrete sense of function. It’s true that something whose concrete function is to generate consciousness will generate consciousness... but it’s vacuously, trivially true.
The point is that the functions which this physical process is implementing are what’s required for consciousness, not the actual physical properties themselves.
If you mean that abstract, computational functions are known to be sufficient to give rise to all aspects of consciousness, including qualia, that is what I am contesting.
I think I’m more optimistic than you that a moderately accurate functional isomorph of the brain could be built which preserves consciousness (largely due to the reasons I mentioned in my previous comment around robustness.
I’m less optimistic because of my arguments.
But putting this aside for a second, would you agree that if all the relevant functions could be implemented in silicon then a functional isomorph would be conscious?
No, not necessarily. That, in the “not necessarily” form, is what I’ve been arguing all along. I also don’t think that consciousness has a single meaning, or that there is agreement about what it means, or that it is a simple binary.
The controversial point is whether consciousness in the hard problem sense—phenomenal consciousness, qualia—will be reproduced with reproduction of function. It’s not controversial that easy problem consciousness—capacities and behaviour—will be reproduced by functional reproduction. I don’t know which you believe, because you are only talking about consciousness not otherwise specified.
If you do mean that a functional duplicate will necessarily have phenomenal consciousness, and you are arguing the point, not just holding it as an opinion, you have a heavy burden:
You need to show some theory of how computation generates conscious experience. Or you need to show why the concrete physical implementation couldn’t possibly make a difference.
@rife
Yes, I’m specifically focused on the behaviour of an honest self-report
Well, you’re not rejecting phenomenal consciousness wholesale.
fine-grained information becomes irrelevant implementation details. If the neuron still fires, or doesn’t, smaller noise doesn’t matter. The only reason I point this out is specifically as it applies to the behaviour of a self-report (which we will circle back to in a moment). If it doesn’t materially affect the output so powerfully that it alters that final outcome, then it is not responsible for outward behaviour.
But outward behaviour is not what I am talking about. The question is whether functional duplication preserves (full) consciousness. And, as I have said, physicalism is not just about fine-grained details. There’s also the basic fact of running on the metal.
I’m saying that we have ruled out that a functional duplicate could lack conscious experience because: we have established conscious experience as part of the causal chain
“In humans”. Even if it’s always the case that qualia are causal in humans, it doesn’t follow that reports of qualia in any entity whatsoever are caused by qualia. Yudkowsky’s argument is no help here, because he doesn’t require reports of consciousness to be *directly* caused by consciousness—a computational zombie’s reports would be caused, not by its own consciousness, but by the programming and data created by humans.
to be able to feel something and then output a description through voice or typing that is based on that feeling. If conscious experience was part of that causal chain, and the causal chain consists purely of neuron firings, then conscious experience is contained in that functionality.
Neural firings are specific physical behaviour, not abstract function. Computationalism is about abstract function.
I understand that there’s a difference between abstract functions and physical functions.
For example, abstractly we could imagine a NAND gate as a truth table—not specifying real voltages and hardware. But in a real system we’d need to implement the NAND gate on a circuit board with specific voltage thresholds, wires, etc.
Functionalism is obviously a broad church, but it is not true that a functionalist needs to be tied to the idea that abstract functions alone are sufficient for consciousness. Indeed, I’d argue that this isn’t a common position among functionalists at all. Rather, they’d typically say something like a physically realised functional process described at a certain level of abstraction is sufficient for consciousness.
To be clear, by “function” I don’t mean some purely mathematical mapping divorced from any physical realisation. I’m talking about the physically instantiated causal/functional roles. I’m not claiming that a simulation would do the job.
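As a minimal sketch of that distinction (my own illustration in Python, with made-up voltage values): the abstract NAND truth table on one side, and a crude “circuit-level” realisation on the other, which operates on voltages and only counts as NAND once a threshold fixes how voltages encode logic levels.

```python
NAND = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # abstract specification

V_HIGH, V_LOW, V_THRESHOLD = 5.0, 0.0, 2.5            # illustrative values only

def nand_circuit(v_a: float, v_b: float) -> float:
    """'Physical' NAND: consumes and produces voltages, not logic levels."""
    a, b = v_a > V_THRESHOLD, v_b > V_THRESHOLD
    return V_LOW if (a and b) else V_HIGH

# Under the chosen encoding, the circuit realises the abstract function.
for (a, b), expected in NAND.items():
    v_out = nand_circuit(V_HIGH if a else V_LOW, V_HIGH if b else V_LOW)
    assert (1 if v_out > V_THRESHOLD else 0) == expected
```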
If you mean that abstract, computational functions are known to be sufficient to give rise to all aspects of consciousness, including qualia, that is what I am contesting.
This is trivially true; there is a hard problem of consciousness that is, well, hard. I don’t think I’ve said that computational functions are known to be sufficient for generating qualia. I’ve said that if you already believe this then you should take the possibility of AI consciousness more seriously.
No, not necessarily. That, in the “not necessarily” form, is what I’ve been arguing all along. I also don’t think that consciousness has a single meaning, or that there is agreement about what it means, or that it is a simple binary.
Makes sense, thanks for engaging with the question.
If you do mean that a functional duplicate will necessarily have phenomenal consciousness, and you are arguing the point, not just holding it as an opinion, you have a heavy burden: You need to show some theory of how computation generates conscious experience. Or you need to show why the concrete physical implementation couldn’t possibly make a difference.
It’s an opinion. I’m obviously not going to be able to solve the Hard Problem of Consciousness in a comment section. In any case, I appreciate the exchange; hopefully this clarifies the spirit of my position.
Thank you for the comment.