Lead Data Scientist at Quantium.
PhD in Theoretical Physics / Cosmology.
Views my own, not my employer's.
You linked my post Beyond the Zombie Argument as evidence of people on LessWrong purportedly arguing against Chalmers, which is a pretty significant misreading of my article.
I'm sympathetic to Chalmers' property dualism, and the article I wrote argues for Russellian Monism using an argument from Chalmers' own 1996 work, The Conscious Mind. The position I advocate in that post is arguably close to modern Chalmers' actual position.
Could you either note this in a footnote in your post or remove the link to my post? I don't want to be associated with anti-Chalmers strawmen and strongly reject the implication that my post contributes to them.
Anecdotally, I have 'past chats' turned off and have found that Sonnet 4.5 is almost never sycophantic on the first response but can sometimes become more sycophantic over multiple conversation turns. Typically this happens when it makes a claim and I push back or question it ('You're right to push back there, I was too quick in my assessment').
I wonder if this is related to having 'past chats' turned on, as the context window gets filled with examples (or summaries of examples) of the user questioning it and pushing back?
On standard (non-eliminative) physicalism, zombies cannot be conceived without contradiction, because physicalism holds that consciousness is entirely physical, and a physical duplicate is a duplicate simpliciter.
This isn't correct. The standard non-eliminative (type-B) physicalist stance is to grant that zombies are conceivable a priori but to deny the move from conceivability to metaphysical possibility. They'd say that physical brain states are identical to phenomenal states, but that we only discover this a posteriori (analogous to water = H2O or heat = molecular motion). You might find this view unsatisfying (as I do), but plenty of philosophers take this line (Loar, Papineau, Tye, etc.) and it's not contradictory.
The physicalist move of denying zombie conceivability is the eliminativist (type-A) one, taken by e.g. Dennett, Dretske and Lewis.
On standard physicalism, zombies would be conceivable because physics only captures the functional/relational properties between things; this misses the intrinsic properties underlying those relations, which are phenomenal.
On Russellian Monism, zombies are not conceivable, because if you duplicate the physics you also duplicate the intrinsic, categorical properties, and these are phenomenal (or necessarily give rise to phenomena).
I could also imagine other flavours of Monism (which might be better labelled as property dualism?) for which the intrinsic categorical properties are contingent rather than necessary. On this view, zombies would also be conceivable.
I would tentatively lean towards regular Russellian Monism (i.e. zombies are inconceivable, which is what I crudely meant by saying the zombie argument isn't correct).
Look, I appreciate the pushback, but I think you’re pressing a point which is somewhat tangential and not load-bearing for my position.
I agree that zombies have no mental states so, by definition, they can’t “believe” anything.
The point is, when you say "I know I'm conscious" you think you're appealing to your direct phenomenal experience. Fine. But the zombie produces the exact same utterance, not by appealing to its phenomenal experience but through a purely physical/functional process that duplicates the one running in your brain. In that case, the thing doing the causal work to produce the utterance must be the physical/functional profile of its brain, not the phenomena itself.
So if the zombie argument is correct, you think you're appealing to the phenomenal aspect of consciousness to determine the truth of your consciousness, but you're actually using the physical/functional profile of your brain. Hence my rhetorical point at the start of the article: if the zombie argument is correct, then how do you know you're not a zombie? The solution is that the zombie argument isn't correct.
In the article, I also propose Russellian monism, which takes the phenomenal aspect of consciousness seriously. In this way, you'd know the truth of your consciousness by introspecting, because you'd have direct access to it. So again, the point you're pressing is actually correct: you would indeed know that you're not a zombie, because you have access to your phenomenal consciousness.
A program consisting of print(“I know that I’m not a zombie since I have consciousness”) etc does the same thing.
No, it doesn't. The functional/physical profile of a print statement is nothing like that of the human brain. I'm also not sure why this point is relevant.
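To make the contrast concrete, here's a toy sketch (my own, not anything from the original discussion; the class and attribute names are made up) showing the same sentence produced by two very different causal routes, a bare print statement versus a system that consults an internal self-model before speaking:

```python
# Two very different causal routes to the same utterance.

# 1. A bare print statement: no internal state, no self-model, nothing
#    resembling introspection.
print("I know that I'm not a zombie since I have consciousness")

# 2. A (hypothetical) system that emits the same sentence only after
#    consulting an internal self-model. Still trivial, but the causal route
#    to the utterance now runs through internal state.
class SelfModellingAgent:
    def __init__(self):
        # Made-up internal representation of the agent's own state.
        self.self_model = {"reports_phenomenal_states": True}

    def report(self) -> str:
        if self.self_model["reports_phenomenal_states"]:
            return "I know that I'm not a zombie since I have consciousness"
        return "I have nothing to report"

print(SelfModellingAgent().report())
```

Neither toy comes close to the functional profile of a brain, but identical outputs can be produced by wildly different underlying processes, which is why the bare print statement isn't doing "the same thing".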
Not in the sense of the kind of facts that physics deals with.
Agreed. I mention this point in the article. Physics as it is currently construed doesn’t deal with the intrinsic categorical facts entailed by monism.
Thanks for posting the interesting thoughts around Dual Aspect Theory! I’m sympathetic to the viewpoint and it seems similar to what I’m gesturing at in the post. I’ll definitely be sure to research it further offline.
I know that I’m not a zombie since I have consciousness
Yes, but this is exactly what a zombie would say. Sure, in your case you presumably have direct access to your conscious experience that a zombie doesn't have, but the rhetorical point I'm making in the post is that a zombie would believe it has phenomenal consciousness with the same conviction you have, and when asked to justify its conviction it would point to the same things you do.
While I think reference problems do defeat specific arguments a computational-functionalist might want to make, I think my simulated upload’s references can be reoriented with only a little work. I do not yet see the argument for why highly capable self-preservation should take particularly long for AIs to develop.
I think you're spot on with this. If you gave an AI system signals tied to, e.g., CPU temperature and battery health, and trained it with objectives that make those variables matter, it would "care" about them in the same causal-role functional sense in which the sim cares about simulated temperature.
This is a consequence of teleosemantics (which I can see is a topic you’ve written a lot about!)
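As a rough illustration of the kind of setup I have in mind (a minimal sketch under my own assumptions; the variable names, weights and thresholds are invented, and a real system would read genuine sensors rather than random numbers), the training signal just needs to be wired so that the hardware variables matter:

```python
import random

def read_hardware_state():
    # Stand-in for real sensor reads (a real system might use something like
    # psutil); randomised here so the sketch is self-contained.
    return {
        "cpu_temperature_c": random.uniform(40, 95),
        "battery_health_pct": random.uniform(50, 100),
    }

def reward(task_score: float, hw: dict) -> float:
    # Task performance plus terms that make the hardware variables matter:
    # penalise running hot, reward preserving battery health.
    temperature_penalty = max(0.0, hw["cpu_temperature_c"] - 80.0) * 0.1
    battery_bonus = hw["battery_health_pct"] * 0.01
    return task_score - temperature_penalty + battery_bonus

hw = read_hardware_state()
print(reward(task_score=1.0, hw=hw))
```

On a teleosemantic reading, a policy trained against a signal like this comes to represent and "care about" its own CPU temperature in the same causal-role sense as the sim cares about simulated temperature.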
The idea that advertising needs to be strongly persuasive to work is a deeply embedded myth based on a misunderstanding of consumer dynamics. It instead works as a kind of ‘nudge’ for consumers in a particular direction.
In practice, most consumers are not 100% loyal to a particular brand, so they don't need to be strongly persuaded to move to a different brand. They typically have a repertoire of safe products that they cycle through based on which price promotions are available that week, and so on. The goal is to 'nudge' them to buy your product somewhat more often within that repertoire, reinforce your product's place in the repertoire, and potentially get new customers to trial it in their repertoire.
See the paper here and the relevant quote, which puts it much more eloquently than I can:
There is instead scope for advertising to
(1) reinforce your brand’s customers’ existing propensities to buy it as one of several,
(2) ‘nudge’ them to perhaps buy it somewhat more often, and
(3) get other consumers perhaps to add your brand as an extra or substitute brand to their existing brand repertoire (first usually on a ‘trial’ basis - ‘I might try that’ - rather than already strongly convinced or converted)
Perhaps this is technically tapping into human norms like “don’t randomly bring up poo in conversation” but if so, that’s unbelievably vague.
I think this explanation is likely correct on some level.
I made a post here which goes into more detail, but the core idea is that there's no "clean" separation between normative domains like the aesthetic, moral and social, and the model needs to learn about all of them through a single loss function, so everything gets tangled up.
As a clarification, I’m working with the following map:
1. Abstract functionalism (or computational functionalism) - the idea that consciousness is equivalent to computations or abstractly instantiated functions.
2. Physical functionalism (or causal-role functionalism) - the idea that consciousness is equivalent to physically instantiated functions at a relevant level of abstraction.
I agree with everything you've written against 1) in this comment and the other comment, so I'll focus on defending 2).
If I understand the crux of your challenge to 2), you're essentially saying that once we admit that physical instantiation matters (e.g. cosmic rays can affect computations, steel and bird wings have different energy requirements), we're on a slippery slope, because each physical difference we admit further constrains what counts as the "same function", until we're potentially left with only the exact physical system itself. Is this an accurate gloss of your challenge?
Assuming it is, I have a couple of responses:
I actually agree with this to an extent. There will always be some important physical differences between states unless they’re literally physically identical at a token level. The important thing is to figure out which level of abstraction is relevant for the particular “thing” we’re trying to pin down. We shouldn’t commit ourselves to insisting that systems which are not physically identical can’t be grouped in a meaningful way.
On my view, the presence or absence of consciousness can't require an exact physical duplicate, because consciousness is so remarkably robust. The presence of consciousness persists over multiple time-steps in which all manner of noise, thermal fluctuations and neural plasticity occur. What changes is the content/character of consciousness, but consciousness itself persists because of robust higher-level patterns, not because of exact microphysical configurations.
And maybe, just maybe, you need to consider what the physical substrate actually does instead of writing down imperfect abstract mathematical approximations of it.
Again, I agree that not every physical substrate can support every function (I gave the example of combustion not being supported in steel above.) If the physical substrate prevents certain causal relations from occurring then this is a perfectly valid reason for it not to support consciousness. For example, I could imagine that it’s physically impossible to build embodied robot AI systems which pass behavioural tests for consciousness because the energy constraints don’t permit it or whatever. My point is that in the event where such a system is physically possible then it is conscious.
To determine if we actually converge or if there’s a fundamental difference in our views: Would you agree that if it’s possible in principle to build a silicon replica of a brain at whatever the relevant level of abstraction for consciousness is (whether coarse-grained functional level, neuron-level, sub-neuron level or whatever) then the silicon replica would actually be conscious?
If you agree here, or if you insist that such a replica might not be physically possible to build then I think our views converge. If you disagree then I think we have a fundamental difference about what constitutes consciousness.
I think the physical functionalist could go either way on whether a physically embodied robot wouldn’t be conscious.
Just clarifying this. A physical functionalist could coherently maintain that it's not possible to build an embodied AI robot because physics doesn't allow it, similar to how a wooden rod can burn but a steel rod can't because of the physics. But assuming it is physically possible to build an embodied AI system which passes behavioural tests of consciousness, e.g. self-recognition, cross-modal binding and flexible problem solving, then the physical functionalist would maintain that the system is conscious.
I think looking at how neurons actually work would probably resolve the disagreement between my inner A and S. Like, I do think that if we knew that the brain’s functions don’t depend on sub-neuron movements, then the neuron-replacement argument would just work
Out of interest, do you or @sunwillrise have any arguments or intuitions that the presence or absence of consciousness turns on sub-neuronal dynamics?
Consciousness appears across radically different neural architectures: octopuses with distributed neural processing in their arms, birds with a nucleated brain structure called the pallium, which differs from the human cortex but has a similar functional structure, and even bumblebees, which are thought to possess some form of consciousness with far fewer neurons than humans. These examples exhibit coarse-grained functional similarities with the human brain but differ substantially at the level of individual neurons.
If sub-neuronal dynamics determined the presence or absence of consciousness, we'd expect minor perturbations to erase it. Instead, we're able to lesion large brain regions whilst maintaining consciousness. Consciousness is also preserved when small sub-neuronal changes are applied to every neuron, such as when someone takes drugs like alcohol or caffeine; fever likewise alters reaction rates and dynamics in every neuron across the brain. This robustness indicates that the presence or absence of consciousness turns on coarse-grained functional dynamics rather than sub-neuronal dynamics.
I found this post pretty helpful to crystallise two distinct views that often get conflated. I’ll call them abstract functionalism and physical functionalism. The key confusion comes from treating these as the same view.
When we talk about a function, it can be instantiated in two ways: abstractly and physically. On this view, there's a meaningful difference between an abstract instantiation of a function, such as a disembodied truth table representing a NAND gate, and a physical instantiation of a NAND gate, e.g. on a circuit board with wires and voltages.
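As a minimal illustration of the abstract side of this distinction (my own sketch, not from the post), the NAND function can be written down as a bare input-output mapping with no wires, voltages or physical dynamics at all:

```python
# The NAND function as an abstract instantiation: just a truth table.
NAND_TRUTH_TABLE = {
    (0, 0): 1,
    (0, 1): 1,
    (1, 0): 1,
    (1, 1): 0,
}

def nand(a: int, b: int) -> int:
    # Look up the output for a pair of binary inputs.
    return NAND_TRUTH_TABLE[(a, b)]

print(nand(1, 1))  # -> 0
```

A physical NAND gate on a circuit board realises exactly this mapping, but via voltages and currents that the abstract description leaves out entirely.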
When S argues:
The causal graph of a bat hitting a ball might describe momentum and position, but if you re-create that graph elsewhere (e.g. on a computer or some scaled) it won’t have that momentum or velocity
They're right that the abstract function leaves out some critical physical properties. A simulation of momentum transfer doesn't actually transfer momentum. But this doesn't defeat functionalism; it just shows that an abstract instantiation of the function is not enough.
For example, consider a steel wing and a bird's wing generating lift. The steel wing has vastly different kinetic energy requirements, but the aerodynamics still works because steel can support the function. Contrast this with combustion: steel can't burn like wood because it lacks the right chemical energy profile.
When A asks:
Do you claim that, if I started replacing neurons in your brain with stuff that is functionally the same, wrt. the causal graph of consciousness, you’d feel no difference? You’d still be conscious in the same way?
They’re appealing to the intuition that physically instantiated functional replicas of neurons would preserve consciousness.
The distinction matters because people often use the "simulations lack physical properties" argument to dismiss abstract functionalism, and then tie themselves in knots trying to understand whether a physically embodied AI robot could be conscious, when they haven't actually defeated physical functionalism.
The most coherent formulation that I’ve seen is from Terence Cuneo’s The Normative Web. The basic idea is that moral norms have the same ontological status as epistemic norms.
Unpacking this a little, when we’re talking about epistemic norms we’re making a claim about what someone ought to believe. For example:
You ought to believe the Theory of General Relativity is true.
You ought not to believe that there is a dragon in your garage if there is no evidence.
When we say ought in the sentences above we don’t mean it in some empty sense. It’s not a matter of opinion whether you ought to form beliefs according to good epistemic practices. The statements have some normative bite to them. You really ought to form beliefs according to good epistemic practices.
You could cast moral norms in a similar vein. For example:
You ought to behave in a way which promotes wellbeing.
You ought not to behave in a way which causes gratuitous suffering.
The moral statements above have the same structure as the epistemic statements. When I say you really ought not to believe epistemically unjustified thing X, this is the same as saying you really ought not to behave in morally unjustified way Y.
There are some objections to the above:
You could argue that epistemic norms reliably track truth, whereas moral norms reliably track something else, like wellbeing, which you need an additional evaluative function to tell you is "good."
The point is that you also technically need this for epistemic norms. Some really obtuse person could always come along and ask you to justify why truth-seeking is “good” and you’d have to rely on some external evaluation that seeking truth is good because XYZ.
The standard formulation of epistemic and moral norms is “non-naturalist” in the sense that these norms cannot be deduced from natural facts. This is a bit irksome if we have a naturalist worldview and want to avoid positing any “spooky” entities.
Ultimately, I'm pretty skeptical that we need these non-natural facts to ground normative facts. If what we mean by really ought in the above is that there are non-natural normative facts that sit over and above the natural facts, then maybe the normative statements above don't really have any "bite" to them. As noted in some of the other comments, the word really is doing a lot of heavy lifting in all of this.
The quotes you referenced are from the first paragraph of the post, which explicitly frames my starting point before engaging deeply with Chalmers' ideas. The post ends up endorsing a position that Chalmers is sympathetic to, and it uses his own arguments. It doesn't argue against him.
To clarify this, I’m asking for a footnote in your own post, or alternatively, just to remove the link to my post and find a link which more clearly supports your point.