As a clarification, I’m working with the following map:
1) Abstract functionalism (or computational functionalism) - the idea that consciousness is equivalent to computations or abstractly instantiated functions.
2) Physical functionalism (or causal-role functionalism) - the idea that consciousness is equivalent to physically instantiated functions at a relevant level of abstraction.
I agree with everything you’ve written against 1) in this comment and the other comment so will focus on defending 2).
If I understand the crux of your challenge to 2), you’re essentially saying that once we admit physical instantiation matters (e.g. cosmic rays can affect computations, steel wings vs. birds’ wings have different energy requirements), then we’re on a slippery slope: each physical difference we admit further constrains what counts as the “same function”, until we’re potentially left with only the exact physical system itself. Is this an accurate gloss of your challenge?
Assuming it is, I have a couple of responses:
I actually agree with this to an extent. There will always be some important physical differences between states unless they’re literally physically identical at a token level. The important thing is to figure out which level of abstraction is relevant for the particular “thing” we’re trying to pin down. We shouldn’t commit ourselves to insisting that systems which are not physically identical can’t be grouped in a meaningful way.
On my view, we can’t require an exact physical duplicate to reflect the presence/absence of consciousness, because consciousness is so remarkably robust. The presence of consciousness persists over multiple time-steps in which all manner of noise, thermal fluctuations and neural plasticity occur. What changes is the content/character of consciousness; consciousness itself persists because of robust higher-level patterns, not because of exact microphysical configurations.
And maybe, just maybe, you need to consider what the physical substrate actually does instead of writing down imperfect abstract mathematical approximations of it.
Again, I agree that not every physical substrate can support every function (I gave the example above of combustion not being supported in steel). If the physical substrate prevents certain causal relations from occurring, then this is a perfectly valid reason for it not to support consciousness. For example, I could imagine that it’s physically impossible to build embodied robot AI systems which pass behavioural tests for consciousness because the energy constraints don’t permit it, or whatever. My point is that if such a system is physically possible, then it is conscious.
To determine if we actually converge or if there’s a fundamental difference in our views: Would you agree that if it’s possible in principle to build a silicon replica of a brain at whatever the relevant level of abstraction for consciousness is (whether coarse-grained functional level, neuron-level, sub-neuron level or whatever) then the silicon replica would actually be conscious?
If you agree here, or if you insist that such a replica might not be physically possible to build, then I think our views converge. If you disagree, then I think we have a fundamental difference about what constitutes consciousness.
Would you agree that if it’s possible in principle to build a silicon replica of a brain at whatever the relevant level of abstraction for consciousness is (whether coarse-grained functional level, neuron-level, sub-neuron level or whatever) then the silicon replica would actually be conscious?
Yeah, I think this is pretty likely.[1]
I will say I do believe, in general, that we simply need a much better understanding of what “consciousness” means before we can reason more precisely about these topics. Certain ontologies can assign short encodings to concepts that are either ultimately confused or at the very least don’t carve reality at the joints.
We typically generalize from one example when it comes to consciousness and subjectivity: “we’re conscious, so there must be some natural concept that consciousness refers to” is how the argument goes. And we reject solipsism because we look at other human beings and notice that they act similarly to us and seem to possess the same internal structure as us, so we think we can safely say that they must also have an “internal life” of subjectivity, just like us. That’s all well and good. But when we move outside of that narrow, familiar domain and try to reason about stuff our intuition was not built for, that’s when the tails come apart and things can get very weird.
But overall, I don’t think we disagree about too much here. I wouldn’t talk about this topic in the terms you chose, and perhaps this does reflect some delta between us, but it’s probably not a major point of disagreement.
As to the additional question of identity, namely whether that replica is the same consciousness as that which it’s meant to replicate… I’d still say no. But that doesn’t seem to be what you’re focused on here.