Against functionalism: a self-dialogue
Context: I had two clashing intuitions on functionalism. This self-dialogue is me exploring them for my own sake.
S: It makes no sense to believe in functionalism. Like, why would the causal graph of a conscious mind be conscious?
A: Something determines whether you’re conscious. So if you draw out that causal graph, and replicate it, how would that structure know it isn’t conscious?
S: That’s silly. The causal graph of a bat hitting a ball might describe momentum and position, but if you re-create that graph elsewhere (e.g. on a computer or some re-scaled version of the system) it won’t have that momentum or position.
A: OK, so you’re right that you can’t just take anything with an arbitrary property, find its causal structure at some suitably high level of abstraction, and recreate that property by making something with that same causal structure. But! You can with some things, e.g. computations of computations are computations.
S: Even that’s not true. A fast computation running in a simulation could be arbitrarily slow.
A: Sure, not literally every property gets ported over. I never claimed it did. You’re invoking properties outside the context of the computation’s causal graph for that, though! Like, you don’t describe a given computation by its real running time, when considered as a pure computation. Instead, you consider its time complexity, its description length, etc. And those are the same within the simulation as outside.
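(A minimal sketch of this point, with a toy virtual machine and names that are purely illustrative: run the same computation natively and inside an interpreter, and the computation-level facts agree, while the real running time, an out-of-context property, does not.)

```python
import time

# Illustrative toy example: the "same" computation, instantiated twice.

def sum_native(n):
    # Native computation: sum of the first n integers, step by step.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_simulated(n):
    # The same computation expressed as a program for a tiny virtual
    # machine, run by an interpreter: a "computation of a computation".
    registers = {"i": 1, "total": 0, "n": n}
    program = [
        ("add", "total", "i"),       # total += i
        ("inc", "i"),                # i += 1
        ("jump_if_le", "i", "n", 0), # loop back while i <= n
    ]
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "add":
            registers[op[1]] += registers[op[2]]
            pc += 1
        elif op[0] == "inc":
            registers[op[1]] += 1
            pc += 1
        elif op[0] == "jump_if_le":
            pc = op[3] if registers[op[1]] <= registers[op[2]] else pc + 1
    return registers["total"]

n = 200_000
t0 = time.perf_counter(); x = sum_native(n);    t1 = time.perf_counter()
t2 = time.perf_counter(); y = sum_simulated(n); t3 = time.perf_counter()

# Computation-level properties agree: same output, same O(n) abstract steps.
assert x == y == n * (n + 1) // 2
# Out-of-context properties differ: the interpreted version is slower in wall-clock time.
print(f"native:    {t1 - t0:.4f}s")
print(f"simulated: {t3 - t2:.4f}s")
```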
S: And? You can’t stop whatever physical processes run consciousness from invoking out-of-context properties.
A: OK, let me switch tacks here. Do you claim that, if I started replacing neurons in your brain with stuff that is functionally the same, wrt. the causal graph of consciousness, you’d feel no difference? You’d still be conscious in the same way?
S: I don’t deny that.
A: Doesn’t that mean I win?
S: No! Because it may be that the only thing which is “functionally the same” are other neurons. In which case, who cares? Simulated utopia is dead.
A: I can’t see why you’d expect that to be plausible.
S: I can’t see why you expect that to be implausible.
A: Seems we’re at an impasse, with irreconcilable intuitions.
S: A tale older than time immemorial.
A: Next time we meet, I’ll have a Gedankenexperiment you won’t be able to beat.
S: I look forward to it.
I found this post pretty helpful to crystallise two distinct views that often get conflated. I’ll call them abstract functionalism and physical functionalism. The key confusion comes from treating these as the same view.
When we talk about a function, it can be instantiated in two ways: abstractly and physically. On this view, there’s a meaningful difference between an abstract instantiation of a function, such as a disembodied truth table representing a NAND gate, and a physical instantiation of a NAND gate, e.g. on a circuit board with wires, voltages, etc.
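A minimal sketch of the abstract instantiation (the code and names are purely illustrative): the gate reduced to nothing but its input-output mapping, with no wires, voltages, or propagation delays anywhere in it.

```python
# Abstract instantiation of a NAND gate: just the disembodied truth table.
NAND_TRUTH_TABLE = {
    (0, 0): 1,
    (0, 1): 1,
    (1, 0): 1,
    (1, 1): 0,
}

def nand(a: int, b: int) -> int:
    """Look up the output in the truth table."""
    return NAND_TRUTH_TABLE[(a, b)]

assert nand(1, 1) == 0 and nand(0, 0) == 1
# A physical instantiation is a different kind of thing: a transistor
# circuit whose voltages realize this same mapping on a board.
```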
When S argues:
They’re right that the abstract instantiation of a function leaves out some critical physical properties. A simulation of momentum transfer doesn’t actually transfer momentum. But this doesn’t defeat functionalism; it just shows that an abstract instantiation of the function is not enough.
For example, consider a steel wing and a bird’s wing generating lift. The steel wing has vastly different kinetic energy requirements, but the aerodynamics still works because steel can support the function. Contrast this with combustion: steel can’t burn like wood because it lacks the right chemical energy profile.
When A asks:
They’re appealing to the intuition that physically instantiated functional replicas of neurons would preserve consciousness.
The distinction matters because people often use the “simulations lack physical properties” argument to dismiss abstract functionalism, and then tie themselves in knots trying to work out whether a physically embodied AI robot system could be conscious, when they haven’t defeated physical functionalism.
Separately from my other comment, I have responses here as well.
Indeed. As I wrote about here:
The basic point, which I think I come back to over and over again,[1] is that the tails come apart when we get down to the nuts and bolts of the fundamental nature of reality if we continue to talk about important topics imprecisely.[2] In a casual conversation with an SWE friend, it’s fine and even desirable to conflate the two instantiations of a function. Not because it’s correct, mind you, but because the way it’s incorrect is likely irrelevant to the topic at hand[3] and being super careful about it wastes time and sounds pedantic. But when you zoom in super deeply and try to get at the core of what makes existence tick, not only are we in an area where we don’t have nearly as much knowledge to justify certainty of this kind, but also any errors we make in our use or delineation of concepts get magnified and lead us down the wrong tracks. We have to bind ourselves tight to reality, which is very hard to do in such a confusing domain.
Sure sounds like a defeat of (that particular example of) functionalism to me! You need more than just the lossy compression you’re using. You need to pay attention to the map-territory distinction and abstain from elevating a particular theory to undeserved heights. And maybe, just maybe, you need to consider what the physical substrate actually does instead of writing down imperfect abstract mathematical approximations of it.
Steel wings and bird wings are similar in some ways. They also differ in some ways, as you pointed out. The more differences you recognize, the more you constrain the possibility space for what kinds of mathematical structures you can build from the ground up to ensure all the differences (and similarities) are faithfully represented.[4] What’s to say that once you consider all the differences,[5] you’re left with nothing but… well, everything? The entire physical instantiation of what’s going on?
As a general matter, I’m not entirely certain what people are truly pointing to in the territory when they say “consciousness.”[6] I don’t know what its True Name is. I’m worried it doesn’t really refer to much once we dissolve our confusions. Whenever I think of this topic, I’m always reminded of this absolutely excellent lc post:
I suppose my interest in lc’s questions means I do care about the stuff @Algon mentioned, at least to some extent.
But probably haven’t really spelled out in so many words yet.
Or by reifying concepts that don’t carve reality at the joints.
And we know this because we have a tremendous amount of background knowledge amassed by experts over the decades that give us a detailed, mechanistic explanation of how unlikely this is to matter.
A greater complexity, more epicycles, if you will.
Which is relevant here and not in other discussions about other topics because, as I explained above, this is a qualitatively different domain.
And I claim they are equally uncertain (or, in most cases, should be equally or even less certain than me when it comes to this).
As a clarification, I’m working with the following map:
1. Abstract functionalism (or computational functionalism): the idea that consciousness is equivalent to computations, or abstractly instantiated functions.
2. Physical functionalism (or causal-role functionalism): the idea that consciousness is equivalent to physically instantiated functions at a relevant level of abstraction.
I agree with everything you’ve written against 1) in this comment and the other comment, so I’ll focus on defending 2).
If I understand the crux of your challenge to 2), you’re essentially saying that once we admit physical instantiation matters (e.g. cosmic rays can affect computations, steel vs. bird wings have different energy requirements), then we’re on a slippery slope, because each physical difference we admit further constrains what counts as the “same function”, until we’re potentially only left with the exact physical system itself. Is this an accurate gloss of your challenge?
Assuming it is, I have a couple of responses:
I actually agree with this to an extent. There will always be some important physical differences between states unless they’re literally physically identical at a token level. The important thing is to figure out which level of abstraction is relevant for the particular “thing” we’re trying to pin down. We shouldn’t commit ourselves to insisting that systems which are not physically identical can’t be grouped in a meaningful way.
On my view, an exact physical duplicate can’t be necessary for the presence or absence of consciousness, because consciousness is so remarkably robust. The presence of consciousness persists over multiple time-steps in which all manner of noise, thermal fluctuations and neural plasticity occur. What changes is the content/character of consciousness; consciousness itself persists because of robust higher-level patterns, not because of exact microphysical configurations.
Again, I agree that not every physical substrate can support every function (I gave the example of combustion not being supported in steel above). If the physical substrate prevents certain causal relations from occurring, then this is a perfectly valid reason for it not to support consciousness. For example, I could imagine that it’s physically impossible to build embodied robot AI systems which pass behavioural tests for consciousness because the energy constraints don’t permit it, or whatever. My point is that, if such a system is physically possible, then it is conscious.
To determine if we actually converge or if there’s a fundamental difference in our views: would you agree that, if it’s possible in principle to build a silicon replica of a brain at whatever the relevant level of abstraction for consciousness is (whether the coarse-grained functional level, the neuron level, the sub-neuron level, or whatever), then the silicon replica would actually be conscious?
If you agree here, or if you insist that such a replica might not be physically possible to build, then I think our views converge. If you disagree, then I think we have a fundamental difference about what constitutes consciousness.
Yeah, I think this is pretty likely.[1]
I will say I do believe, in general, that we simply need a much better understanding of what “consciousness” means before we can reason more precisely about these topics. Certain ontologies can assign short encodings to concepts that are either ultimately confused or at the very least don’t carve reality at the joints.
We typically generalize from one example when it comes to consciousness and subjectivity: “we’re conscious, so therefore there must be some natural concept consciousness refers to” is how the argument goes. And we reject solipsism because we look at other human beings and notice that they act similarly to us and seem to possess the same internal structure as us, so we think we can safely say that they must also have an “internal life” of subjectivity, just like us. That’s all fine and good. But when we move outside of that narrow, familiar domain and try to reason about stuff our intuition was not built for, that’s when the tails come apart and stuff can get very weird.
But overall, I don’t think we disagree about too much here. I wouldn’t talk about this topic in the terms you chose, and perhaps this does reflect some delta between us, but it’s probably not a major point of disagreement.
As to the additional question of identity, namely whether that replica is the same consciousness as that which it’s meant to replicate… I’d still say no. But that doesn’t seem to be what you’re focused on here.
: ) I’m glad you found this helpful. I was unsure whether it would be clear to others.
I wonder what the standard terminology is. Also, I think you could easily swap out “abstract functionalism” for “computationalism” in this dialogue, with a few minor tweaks to wording, and it would still work. Indeed, that’s how I initially wrote it. So using your wording, I’d say this dialogue’s really aiming to crystallize the difference between physical functionalism and other philosophies of mind.
Changed “some scaled” to “or some re-scaled version of the system”. Thanks for making that error salient!
Hadn’t thought of this, though I think the physical functionalist could go either way on whether a physically embodied robot would be conscious.
As an aside, I think looking at how neurons actually work would probably resolve the disagreement between my inner A and S. Like, I do think that if we knew that the brain’s functions don’t depend on sub-neuron movements, then the neuron-replacement argument would just work. But since S is meant to simulate @sunwillrise, and they are a high perplexity individual, I may well be wrong on whether they’d find that convincing.
Just clarifying this. A physical functionalist could coherently maintain that it’s not possible to build an embodied AI robot because physics doesn’t allow it, similar to how a wooden rod can burn but a steel rod can’t because of the physics. But assuming that it is physically possible to build an embodied AI system which passes behavioural tests of consciousness (e.g. self-recognition, cross-modal binding, flexible problem solving, etc.), then the physical functionalist would maintain that the system is conscious.
Out of interest, do you or @sunwillrise have any arguments or intuitions that the presence or absence of consciousness turns on sub-neuronal dynamics?
Consciousness appears across radically different neural architectures: octopuses with distributed neural processing in their arms; birds with a nucleated brain structure called the pallium, which differs from the human cortex but has a similar functional structure; even bumblebees, with far fewer neurons than humans, are thought to possess some form of consciousness. These examples exhibit coarse-grained functional similarities with the human brain, but differ substantially at the level of individual neurons.
If sub-neuronal dynamics determined the presence or absence of consciousness, we’d expect minor perturbations to erase it. Instead, we’re able to lesion large brain regions whilst maintaining consciousness. Consciousness is also preserved when small sub-neuronal changes are applied to every neuron, such as when someone takes drugs like alcohol or caffeine. Fever, too, alters reaction rates and dynamics in every neuron across the brain. This robustness indicates that the presence or absence of consciousness turns on coarse-grained functional dynamics rather than sub-neuronal dynamics.
I will point to what I (and others) have commented on or posted about before, and I shall hope that’s an adequate answer to your question.
I have written:
I have also written:
TAG has written:
Andesoldes has written:
I have written about related topics before.
S was meant to represent you in this dialogue.