Separately from my other comment, I have responses here as well.
When we talk about a function it can be instantiated in two ways: abstractly and physically. On this view there’s a meaningful difference between an abstract instantiation of a function, such as a disembodied truth table representing a NAND gate, and a physical instantiation of a NAND gate, e.g. on a circuit board with wires, voltages, etc.
Indeed. As I wrote about here:

I think it would be non-physicalist if (to slightly modify the analogy, for illustrative purposes) you say that a computer program I run on my laptop can be identified with the Python code it implements, because the code is not what actually happens.
We can see this as a result of phenomena like single-event upsets: situations in which, for example, stray cosmic rays modify the bits in a transistor of the physical entity that runs the code (i.e., the laptop) in such a manner that the output of the program fundamentally changes. So the running of the program (instantiated and embedded in the real, physical world, just like a human is) works not on the basis of the lossy model that takes only the “software” part into account, but on the basis of the “hardware” itself.
You can of course expand the idea of “computation” to say that, actually, it takes into account the stray cosmic rays as well, and in fact everything that can affect the output, at which point “computation” stops being a subset of “what happens” and becomes the entirety of it. So if you want to say that the computation necessarily involves the entirety of what is physically there, then I believe I agree, at which point this is no longer the computationalist thesis argued for by Rob, Ruben, etc. (for example, the corollaries about WBE preserving identity when only an augmented part of the brain’s connectome is scanned no longer hold).
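To make the single-event-upset point from the quoted passage concrete, here is a minimal Python sketch (the function, the values, and the flipped bit position are all invented for illustration, and this is obviously not a model of real SEU physics): flip one stored bit and the program’s output changes, so the lossy “software-only” description no longer predicts what the physical machine does.

```python
# Toy illustration only: the "abstract" function is what the source code
# says; the "physical" run is the same function after one stored bit has
# been flipped by, say, a stray cosmic ray.

def threshold(x: int) -> str:
    """The abstract instantiation: what the Python source says should happen."""
    return "fire" if x > 100 else "idle"

def flip_bit(value: int, bit: int) -> int:
    """Simulate a single-event upset: flip one bit of the stored value."""
    return value ^ (1 << bit)

x = 90
print(threshold(x))    # abstract instantiation: idle

x = flip_bit(x, 5)     # 90 ^ 32 == 122; one bit flipped in "hardware"
print(threshold(x))    # physical instantiation now outputs: fire
```

The second output is fully determined by the physical state, while the disembodied Python code alone would have predicted the first.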
The basic point, which I think I come back to over and over again,[1] is that the tails come apart when we get down to the nuts and bolts of the fundamental nature of reality if we continue to talk about important topics imprecisely.[2] In a casual conversation with an SWE friend, it’s fine and even desirable to conflate the two instantiations of a function. Not because it’s correct, mind you, but because the way it’s incorrect is likely irrelevant to the topic at hand[3] and being super careful about it wastes time and sounds pedantic. But when you zoom in super deeply and try to get at the core of what makes existence tick, not only are we in an area where we don’t have nearly as much knowledge to justify certainty of this kind, but also any errors we make in our use or delineation of concepts get magnified and lead us down the wrong tracks. We have to bind ourselves tight to reality, which is very hard to do in such a confusing domain.
But this doesn’t defeat functionalism; it just shows that abstract instantiation of the function is not enough.
Sure sounds like a defeat of (that particular example of) functionalism to me! You need more than just the lossy compression you’re using. You need to pay attention to the map-territory distinction and abstain from elevating a particular theory to undeserved heights. And maybe, just maybe, you need to consider what the physical substrate actually does instead of writing down imperfect abstract mathematical approximations of it.
For example, consider a steel wing and a bird’s wing generating lift. The steel wing has vastly different kinetic energy requirements, but the aerodynamics still works because steel can support the function. Contrast this with combustion: steel can’t burn like wood because it lacks the right chemical energy profile.
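To spell out the physics behind the quoted example (this is just the textbook lift equation, not anything from the comment itself): lift depends on air density ρ, airspeed v, wing area S, and a shape-dependent lift coefficient C_L, with no term anywhere for the wing’s material:

$$L = \tfrac{1}{2}\,\rho\,v^{2}\,S\,C_L$$

The material enters only indirectly, through the wing’s mass and the thrust needed to sustain v, which is exactly where the “vastly different kinetic energy requirements” come from.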
Steel wings and bird wings are similar in some ways. They also differ in some ways, as you pointed out. The more differences you recognize, the more you constrain the possibility space for what kinds of mathematical structures you can build from the ground up to ensure all the differences (and similarities) are faithfully represented.[4] Who’s to say that once you consider all the differences,[5] you aren’t left with nothing but… well, everything? The entire physical instantiation of what’s going on?
The distinction matters because people often use the “simulations lack physical properties” argument to dismiss abstract functionalism and then tie themselves in knots trying to understand whether a physically embodied AI robot system could be conscious when they haven’t defeated physical functionalism.
As a general matter, I’m not entirely certain what people are truly pointing to in the territory when they say “consciousness.”[6] I don’t know what its True Name is. I’m worried it doesn’t really refer to much once we dissolve our confusions. Whenever I think of this topic, I’m always reminded of this absolutely excellent lc post:
It is both absurd, and intolerably infuriating, just how many people on this forum think it’s acceptable to claim they have figured out how qualia/consciousness works, and also not explain how one would go about making my laptop experience an emotion like ‘nostalgia’, or present their framework for enumerating the set of all possible qualitative experiences. When it comes to this particular subject, rationalists are like crackpot physicists with a pet theory of everything, except rationalists go “Huh? Gravity?” when you ask them to explain how their theory predicts gravity, and then start arguing with you about gravity needing to be something explained by a theory of everything. You people make me want to punch my drywall sometimes.
For the record: the purpose of having a “theory of consciousness” is so it can tell us which blobs of matter feel particular things under which specific circumstances, and teach others how to make new blobs of matter that feel particular things. Down to the level of having a field of AI anaesthesiology. If your theory of consciousness does not do this, perhaps because the sum total of your brilliant insights are “systems feel ‘things’ when they’re, y’know, smart, and have goals. Like humans!”, then you have embarrassingly missed the mark.
I suppose my interest in lc’s questions means I do care about the stuff @Algon mentioned, at least to some extent.
[1] But probably haven’t really spelled out in so many words yet.
[2] Or by reifying concepts that don’t carve reality at the joints.
[3] And we know this because we have a tremendous amount of background knowledge amassed by experts over the decades that gives us a detailed, mechanistic explanation of how unlikely this is to matter.
[4] A greater complexity, more epicycles, if you will.
[5] Which is relevant here and not in other discussions about other topics because, as I explained above, this is a qualitatively different domain.
[6] And I claim they are equally uncertain (or, in most cases, should be equally or even less certain than me when it comes to this).
As a clarification, I’m working with the following map:
1) Abstract functionalism (or computational functionalism): the idea that consciousness is equivalent to computations or abstractly instantiated functions.
2) Physical functionalism (or causal-role functionalism): the idea that consciousness is equivalent to physically instantiated functions at a relevant level of abstraction.
I agree with everything you’ve written against 1) in this comment and the other comment, so I will focus on defending 2).
If I understand the crux of your challenge to 2), you’re essentially saying that once we admit physical instantiation matters (e.g. cosmic rays can affect computations; steel and bird wings have different energy requirements), then we’re on a slippery slope, because each physical difference we admit further constrains what counts as the “same function”, until we’re potentially left with only the exact physical system itself. Is this an accurate gloss of your challenge?
Assuming it is, I have a couple of responses:
I actually agree with this to an extent. There will always be some important physical differences between states unless they’re literally physically identical at a token level. The important thing is to figure out which level of abstraction is relevant for the particular “thing” we’re trying to pin down. We shouldn’t commit ourselves to insisting that systems which are not physically identical can’t be grouped in a meaningful way.
On my view, it can’t be that we need an exact physical duplicate to reflect the presence or absence of consciousness, because consciousness is so remarkably robust. The presence of consciousness persists over multiple time-steps in which all manner of noise, thermal fluctuations, and neural plasticity occur. What changes is the content/character of consciousness; consciousness itself persists because of robust higher-level patterns, not because of exact microphysical configurations.
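Here is a minimal sketch of what I mean by a robust higher-level pattern (a toy example supplied for illustration, not a model of the brain): the high-level state is read out by majority vote over many noisy low-level units, so scattered low-level flips change the microstate without touching the macrostate.

```python
# Toy example: a higher-level pattern that survives microphysical noise.
# The readout rule and the numbers are invented for illustration.

import random

def macrostate(units: list[int]) -> int:
    """Higher-level readout: majority vote over the low-level units."""
    return 1 if sum(units) > len(units) // 2 else 0

units = [1] * 1000        # one microstate realizing macrostate 1

for _ in range(50):       # sparse noise: up to 50 random bit flips
    i = random.randrange(len(units))
    units[i] ^= 1

print(macrostate(units))  # still 1: the higher-level pattern persisted
```

The post-noise microstate differs from the original, but at the level of abstraction where the pattern lives, nothing has changed.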
And maybe, just maybe, you need to consider what the physical substrate actually does instead of writing down imperfect abstract mathematical approximations of it.
Again, I agree that not every physical substrate can support every function (I gave the example of combustion not being supported in steel above). If the physical substrate prevents certain causal relations from occurring, then this is a perfectly valid reason for it not to support consciousness. For example, I could imagine that it’s physically impossible to build embodied robot AI systems which pass behavioural tests for consciousness because the energy constraints don’t permit it, or whatever. My point is that if such a system is physically possible, then it is conscious.
To determine if we actually converge or if there’s a fundamental difference in our views: Would you agree that if it’s possible in principle to build a silicon replica of a brain at whatever the relevant level of abstraction for consciousness is (whether coarse-grained functional level, neuron-level, sub-neuron level, or whatever), then the silicon replica would actually be conscious?
If you agree here, or if you insist that such a replica might not be physically possible to build, then I think our views converge. If you disagree, then I think we have a fundamental difference about what constitutes consciousness.
Would you agree that if it’s possible in principle to build a silicon replica of a brain at whatever the relevant level of abstraction for consciousness is (whether coarse-grained functional level, neuron-level, sub-neuron level, or whatever), then the silicon replica would actually be conscious?
Yeah, I think this is pretty likely.[1]

I will say I do believe, in general, that we simply need a much better understanding of what “consciousness” means before we can reason more precisely about these topics. Certain ontologies can assign short encodings to concepts that are either ultimately confused or at the very least don’t carve reality at the joints.
We typically generalize from one example when it comes to consciousness and subjectivity: “we’re conscious, so therefore there must be some natural concept consciousness refers to” is how the argument goes. And we reject solipsism because we look at other human beings and notice that they act similarly to us and seem to possess the same internal structure as us, so we think we can safely say that they must also have an “internal life” of subjectivity, just like us. That’s all fine and good. But when we move outside of that narrow, familiar domain and try to reason about stuff our intuition was not built for, that’s when the tails come apart and stuff can get very weird.
But overall, I don’t think we disagree about too much here. I wouldn’t talk about this topic in the terms you chose, and perhaps this does reflect some delta between us, but it’s probably not a major point of disagreement.
As to the additional question of identity, namely whether that replica is the same consciousness as that which it’s meant to replicate… I’d still say no. But that doesn’t seem to be what you’re focused on here.