Does functionalism imply dualism?
This post follows on from Personal research update, and is followed by State your physical explanation of experienced color.
In a recent post, I claimed that functionalism about consciousness implies dualism. Since most functionalists think their philosophy is an alternative to dualism, I’d better present an argument.
But before I go further, I’ll link to orthonormal’s series on dissolving the problem of “Mary’s Room”: Seeing Red: Dissolving Mary’s Room and Qualia, A Study of Scarlet: The Conscious Mental Graph, Nature: Red in Truth, and Qualia. Mary’s Room is one of many thought experiments bandied about by philosophers in their attempts to say whether or not colors (and other qualia) are a problem for materialism, and orthonormal presents a computational attempt to get around the problem which is a good representative of the functionalist style of thought. I won’t have anything to say about those articles at this stage (maybe in comments), but they can serve as an example of what I’m talking about.
Now, though it may antagonize some people, I think it is best to start off by stating my position plainly and bluntly, rather than starting with a neutral discussion of what functionalism is and how it works, and then seeking to work my way from there to the unpopular conclusion. I will stick to the example of color to make my points—apologies to blind and colorblind readers.
My fundamental thesis is that color manifestly does exist—there are such things as shades of green, shades of red, etc—and that it manifestly does not exist in any standard sort of physical ontology. In an arrangement of point particles in space, there are no shades of green present. This is obviously true, and it’s equally obvious for more complicated ontologies like fields, geometries, wavefunction multiverses, and so on. It’s even part of the history of physics: Galileo already distinguished between primary qualities like location and shape, and secondary qualities like color. Primary qualities are out there and objectively present in the external world; secondary qualities are only in us; and physics will only concern itself with primary qualities. The ontological world of physical theory is colorless. (We may call light of a certain wavelength green light or red light, but that is because it produces an experience of seeing green or seeing red, not because the light itself is green or red in the original sense of those words.) And what has happened due to the progress of the natural sciences is that we now say that experiences are in brains, and brains are made of atoms, and atoms are described by a physics which does not contain color. So the secondary qualities have vanished entirely from this picture of the world; there is no opportunity for them to exist within us, because we are made of exactly the same stuff as the external world.
Yet the “secondary qualities” are there. They’re all around us, in every experience. It really is this simple: colors exist in reality, they don’t exist in theory, therefore the theory needs to be augmented or it needs to be changed. Dualism is an augmentation. My speculations about quantum monads are supposed to pave the way for a change. But I won’t talk about that option here. Instead, I will try to talk about theories of consciousness which are meant to be compatible with physicalism—functionalism is one such theory.
Such a theory will necessarily present a candidate, however vague, for the physical correlate of an experience of color. One can then say that color exists without having to add anything to physics, because the color just is the proposed physical correlate. This doesn’t work because the situation hasn’t changed. If all you have are point particles whose only property is location, then individual particles do not have the property of being colored, nor do they have that property in conjunction. Identifying a physical correlate simply picks out a particular set of particles and says “there’s your experience of color”. But there’s still nothing there that is green or red. You may accustom yourself to thinking of a particular material event, a particular rearrangement of atoms in space, as being the color, but that’s just the power of habitual association at work. You are introducing into your concept of the event a property that is not inherently present in it.
It may be that one way people manage to avoid noticing this is by an incomplete chain of thought. I might say: none of the objects in your physical theory are green. The happy materialist might say: but those aren’t the things which are truly green in the sense you care about; the things which are green are parts of experiences, not the external objects. I say: fine. But experiences have to exist, right? And you say that physics is everything. So that must mean that experiences are some sort of physical object, and so it will be just as impossible for them to be truly green, given the ontological primitives we have to work with. But for some reason, this further deduction isn’t made. Instead, it is accepted that objects in physical space aren’t really green, but the objects of experience exist in some other “space”, the space of subjective experience, and… it isn’t explicitly said that objects there can be truly green, but somehow this difference between physical space and subjective space seems to help people be dualists without actually noticing it.
It is true that color exists in this context—a subjective space. Color always exists as part of an “experience”. But physical ontology doesn’t contain subjective space or conscious experience any more than it does contain color. What it can contain are state machines which are structurally isomorphic to these things. So here we can finally identify how a functionalist theory of consciousness works psychologically: You single out some state machines in your physical description of the brain (like the networks in orthonormal’s sequence of posts); in your imagination, you associate consciousness with certain states of such state machines, on the basis of structural isomorphism; and now you say, conscious states are those physical states. Subjective space is some neural topographic map, the subjectively experienced body is the sensorimotor homunculus, and so forth.
But if we stick to any standard notion of physical theory, all those brain parts still don’t have any of the properties they need. There’s no color there, there’s no other space there, there’s no observing agent. It’s all just large numbers of atoms in motion. No-one is home and nothing is happening to them.
Clearly it is some sort of progress to have discovered, in one’s physical picture of the world, the possibility of entities which are roughly isomorphic to experiences, colors, etc. But they are still not the same thing. Most of the modern turmoil of ideas about consciousness in philosophy and science is due to this gap—attempts to deny it, attempts to get by without noticing it, attempts to force people to notice it. orthonormal’s sequence, for example, seems to be an attempt to exhibit a cognitive model for experiences and behaviors that you would expect if color exists, without having to suppose that color actually exists. If we were talking about a theoretical construct, this would be fine. We are under no obligation to believe that phlogiston exists, only to explain why people once talked about it.
But to extend this attitude to something that most of us are directly experiencing in almost every waking moment, is … how can I put this? It’s really something. I’d call it an act of intellectual desperation, except that people don’t seem to feel desperate when they do it. They are just patiently explaining, recapitulating, and elaborating some “aha” moment they had back in their past, when functionalism made sense to them. My thesis is certainly that this sense of insight, of having dissolved the problem, is an illusion. The genuineness of the isomorphism between conscious state and coarse-grained physical state, and the work of several generations of materialist thinkers to develop ways of speaking which smoothly promote this isomorphism to an identity, combine to provide the sense that no problem remains to be solved. But all you have to do is attend for a moment to experience itself, and then to compare that to the picture of billions of colorless atoms in intricate motion through space, to realize that this is still dualism.
I promised not to promote the monads, but I will say this. The way to avoid dualism is to first understand consciousness as it is in itself, without the presupposition of materialism. Observe the structure of its states and the dynamics of its passage. That is what phenomenology is about. Then, sketch out an ontology of what you have observed. It doesn’t have to contain everything in infinite detail, it can overlook some features. But I would say that at a minimum it needs to contain the triad of subject-object-aspect (which appears under various names in the history of philosophy). There are objects of awareness, they are being experienced within a common subjective space, and they are experienced in a certain aspect. Any theory of reality, whether or not it is materialist, must contain such an entity in order to be true.
The basic entity here is the experiencing subject. Conscious states are its states. And now we can begin to tackle the ontological status of state machines, as a candidate for the ontological category to which conscious beings belong.
State machines are abstracted descriptions. We say: there’s a thing; it has a set of possible states; here are the allowed transitions between them, and the conditions under which those transitions occur. Specify all that and we have specified a state machine. We don’t care about why those are the states or why the transitions occur; those are irrelevant details.
A very simple state machine might be denoted by the state transition network “1<->2”. There’s a state labeled 1 and another state labeled 2. If the machine is in state 1, it proceeds to state 2, and if it is in state 2, it proceeds to state 1. This state machine is realized wherever you have something that oscillates between two states without stopping in either. First the Earth is close to the sun, then it is far from the sun, then it is close again… The Earth in its orbit instantiates the state machine “1<->2”. I get involved with Less Wrong, then I quit for a while, then I come back… My Internet habits also instantiate the state machine “1<->2”.
A computer program is exactly like this, a state machine of great complexity (and usually its state transition rules contain some dependence on external conditions, like user input) which has been physically instantiated for use. But one cannot claim that its states have any intrinsic meaning, any more than I can claim that the state 1 in the oscillating state machine is intrinsically about the earth being close to the sun. This is not true, even if I write down the state transition network in the form “CloseToTheSun<->FarFromTheSun”.
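To make the point about labels concrete, here is a minimal sketch in Python (my own illustration, not anything from the original discussion): the bare oscillator and a “meaningfully” labeled version are the same machine up to a relabeling of states, which is exactly why the labels cannot carry intrinsic meaning.

```python
# A state machine as pure structure: a set of states plus a transition map.
# This is the two-state oscillator "1 <-> 2" from the text.
oscillator = {1: 2, 2: 1}

# The "same" machine with suggestive labels. Nothing structural has
# changed; only the names attached to the states.
orbit = {"CloseToTheSun": "FarFromTheSun",
         "FarFromTheSun": "CloseToTheSun"}

def isomorphic(m1, m2, mapping):
    """True if `mapping` carries the transitions of m1 onto those of m2."""
    return all(mapping[m1[s]] == m2[mapping[s]] for s in m1)

# The relabeling 1 -> CloseToTheSun, 2 -> FarFromTheSun preserves all
# transitions, so the two dicts specify a single abstract machine.
assert isomorphic(oscillator, orbit,
                  {1: "CloseToTheSun", 2: "FarFromTheSun"})

def run(machine, state, steps):
    """Generate the trajectory: the same endless alternation either way."""
    for _ in range(steps):
        state = machine[state]
        yield state

print(list(run(oscillator, 1, 4)))           # [2, 1, 2, 1]
print(list(run(orbit, "CloseToTheSun", 4)))  # the same oscillation
```

The Earth’s orbit, my posting habits, and this dict all instantiate the one structure; the structure itself is indifferent to which, and to what the states are called.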
This is another ontological deficiency of functionalism. Mental states have meanings, thoughts are always about something, and what they are about is not the result of convention or of the needs of external users. This is yet another clue that the ontological status of conscious states is special, that their “substance” matters to what they are. Of course, this is a challenge to the philosophy which says that a detailed enough simulation of a brain will create a conscious person, regardless of the computational substrate. The only reason people believe this is that they believe the brain itself is not a special substrate. But this is a judgment made on the basis of science that is still at a highly incomplete stage, and certainly I expect science to tell us something different by the time it’s finished with the brain. The ontological problems of functionalism provide a strong a priori reason for this expectation.
What is more challenging is to form a conception of the elementary parts and relations that could form the basis of an alternative ontology. But we have to do this, and the impetus has to come from a phenomenological ontology of consciousness that is as precise as possible. Fortunately, a great start was made on this about 100 years ago, in the heyday of phenomenology as a philosophical movement.
A conscious mind is a state machine, in the sense that it has states and transitions between them. The states also have structure, because conscious experiences do have parts. But the ontological ties that combine those parts into the whole are poorly apprehended by our current concepts. When we try to reduce them to nothing but causal coupling or to the proximity in space of presumed physical correlates of those parts, we are, I believe, getting it wrong. Clearly cause and effect operates in the realm of consciousness, but it will take great care to state precisely and correctly the nature of the things which are interacting and the ways in which they do so. Consider the ability to tell apart different shades of color. It’s not just that the colors are there; we know that they are there, and we are able to tell them apart. This implies a certain amount of causal structure. But the perilous step is to focus only on that causal structure, detach it from considerations of how things appear to be in themselves, and instead say “state machine, neurons doing computations, details interesting but not crucial to my understanding of reality”. Somehow, in trying to understand conscious cognition, we must remain in touch with the ontology of consciousness as partially revealed in consciousness itself. The things which do the conscious computing must be things with the properties that we see in front of us, the properties of the objects of experience, such as color.
You know, color—authentic original color—has been banished from physical ontology for so long, that it sounds a little mad to say that there might be a physical entity which is actually green. But there has to be such an entity, whether or not you call it physical. Such an entity will always be embedded in a larger conscious experience, and that conscious experience will be embedded in a conscious being, like you. So we have plenty of clues to the true ontology; the clues are right in front of us; we’re subjectively made of these clues. And we will not truly figure things out, unless we remain insistent that these inconvenient realities are in fact real.
Summary: “My feelings, therefore dualism.”
Do animals have qualia? If yes, what evolutionary advantage do they serve in animals? If no, how did this complex structure (of quantum microtubules or whatever else) suddenly appear? Is qualia-possession binary? Did some human ancestor with no qualia give birth to a child with qualia?
More generally, is there a plausible causal history of human qualia?
Since qualia don’t have any influence upon the external world, and are not caused by the physical world, they must have a causal history that is independent of the physical world. The best explanation is that they are created by a mental substance which all qualia-possessing beings have, the best explanation of which is God’s creative action.
Qualia don’t serve any evolutionary purpose. They don’t have any causal influence on the external world at all. Which is also why we may never know whether animals have qualia (unless someone like God reveals that information to us).
If you object to cruelty to animals, you probably believe they do.
High-calorie food tastes sweet, potential poisons taste bitter, etc.
Microtubules are not uniquely human.
Finding yourself to be a conscious being is anthropically necessary. If the universe contains quantum-computational conscious beings and classical-computational zombies, and only the first are conscious, then you can only ever be the first kind of being, and you can only ever find that you had an evolutionary history that managed to produce such beings as yourself. (ETA: Also, you can only find yourself to exist in a universe where consciousness can exist, no matter how exotic an ontology that requires.)
Obviously I believe in the possibility of unconscious simulations of conscious beings. All it should require is implementing a conscious state machine on a distributed base. But I have no idea how likely it is that evolution should produce something like that. Consciousness does have survival value, and given that I take genuine conscious states to be something relatively fundamental, some fairly fundamental laws are probably implicated in the details of its internal causality. I simply don’t know whether a naturally evolved unconscious intelligence would be likely to have a causal architecture isomorphic to that of a conscious intelligence, or whether it would be more likely to implement useful functions like self-monitoring in a computationally dissimilar way.
What I say about the internal causality of genuine consciousness may sound mysterious, so I will try to give an example; I emphasize this is not even speculation, it’s just an ontology of consciousness which allows me to make a point.
One of the basic features of conscious states is intentionality—they’re about something. So let us say that a typical conscious state contains two sorts of relations—“being aware of” a quale, and “paying attention to” a quale. Unreflective consciousness is all awareness and no attention, while a reflective state of consciousness will consist of attending to certain qualia, amid a larger background of qualia which are just at the level of awareness.
Possible states of consciousness would be specified by listing the qualia and by listing whether the subject is attending to them or just aware of them. (The whole idea is that when attending, you’re aware that you are aware.) Now we have a state space, we can talk about dynamics. There will be a “physical law” governing transitions in the conscious state, whereby the next state after the current one is a function of the current state and of various external conditions.
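As a minimal sketch of what such a state space and transition law could look like (purely illustrative: the Python representation and the placeholder condition below are my own inventions, not part of the hypothetical ontology itself):

```python
from enum import Enum

class Mode(Enum):
    AWARE = "aware"          # simple awareness of a quale
    ATTENDING = "attending"  # aware of being aware of it

def step(state, condition):
    """One application of the hypothetical 'physical law': the next
    conscious state is a function of the current state and external
    conditions. `condition(q, mode)` stands in for whatever
    psycho-phenomenological condition actually promotes a quale."""
    return {q: (Mode.ATTENDING if condition(q, mode) else mode)
            for q, mode in state.items()}

# Unreflective consciousness: all awareness, no attention.
s0 = {"A": Mode.AWARE, "B": Mode.AWARE, "C": Mode.AWARE}

# A placeholder condition that happens to promote quale "A":
s1 = step(s0, condition=lambda q, mode: q == "A")
print(s1)  # "A" is now ATTENDING; "B" and "C" remain merely AWARE
```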
An example of a transition that might be of interest is the transition from the state “aware of A, aware of B, aware of C...” to the state “attending to A, aware of B, aware of C...” What are the conditions under which we start attending to something—the conditions under which we become aware of being aware of something? In this hypothetical ontology, there would be a fundamental law describing the exact conditions which cause such a transition. We can go further and think about embedding this model of mind into a formal ontology of monads whose mathematical states are drawn from, say, Hilbert spaces with nested graded subspaces of varying dimensionality, and which reproduces quantum mechanics in some limit. We might be able to represent the recursive nature of iterated reflection (being aware of being aware of being aware of A) by utilizing this subspace structure.
We are then to think of the world as consisting mostly of “monads” or tensor factors drawn from the subspaces of smallest dimensionality, but sometimes they evolve into states of arbitrarily high dimensionality, something which corresponds to the formation of entangled states in conventional quantum mechanics. But this is all just mathematical formalism, and we are to understand that the genuine ontology of the complex monadic states is this business about a subject perceiving a set of qualia under a mixture of the two aspects (awareness versus attention), and that the dynamical laws of nature that pertain to monads in reflective states are actually statements of the form “A quale jumps from awareness level to attention level if… [some psycho-phenomenological condition is met]”.
Furthermore, it would be possible to simulate complex individual monads with appropriately organized clusters of simple monads, but ontologically you wouldn’t actually have the complex states of awareness and attention being present, you would just have lots of simple monads being used like dots in a painting or bits in a computer.
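A similarly minimal sketch of the “distributed base” idea (again my own illustration): the complex state survives only as an external decoding convention laid over many simple carriers, none of which is itself aware of or attending to anything.

```python
# A "complex" state: two qualia with their modes of presentation.
complex_state = {"A": "ATTENDING", "B": "AWARE"}

# Scatter it across simple two-state carriers, each holding one bit
# and knowing nothing of qualia, awareness, or attention.
encode = {"AWARE": 0, "ATTENDING": 1}
carriers = [encode[mode] for _, mode in sorted(complex_state.items())]

# Only the external decoding convention recovers "the state":
decode = {0: "AWARE", 1: "ATTENDING"}
recovered = {q: decode[bit]
             for q, bit in zip(sorted(complex_state), carriers)}
assert recovered == complex_state  # isomorphic by convention alone
```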
I really do expect that the truth about how consciousness works is going to sound this weird and this concrete, even if this specific fancy is way off in its details.
Sorry, I think I was unclear. When I was wondering about the causal history of human qualia, I didn’t mean the causal history of a particular quale in a human, but rather the causal history of why humans have qualia.
I don’t think anthropics are a sufficient answer to that question; if there exist no plausible causal histories of humans with qualia, then either the humans or the qualia have to go.
If zombies are possible, why can’t this “you” you are talking to be a zombie? Zombies should be capable of reasoning correctly in the sleeping beauty problem, or about waking up in blue or red rooms, etc.
If you make a zombie clone of a human (not necessarily a perfect copy, merely one that’s similar enough that it can’t tell whether it’s the zombie or not), and have them both play a game: each is shown a button and has the choice to press it or not; if neither presses it, they get $1,000; if both press it, they get nothing; and if only the human presses it, they get $1,000,000 (in all cases, the money is split between the copies). In such a scenario, you’d better hope that the zombie doesn’t follow your advice and reason that it must be the human.
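One way to see the force of this (a toy sketch of my own; the payoffs are as stated above): because the clone can’t tell which one it is, both copies necessarily run the same decision procedure, so the asymmetric $1,000,000 outcome is never actually reachable.

```python
# Payoffs for the symmetric outcomes; with a shared deterministic
# decision procedure, these are the only outcomes that can occur.
payoff = {
    (False, False): 1_000,   # neither presses
    (True, True): 0,         # both press
}

def play(press):
    """`press` is the output of the shared decision procedure."""
    human, zombie = press, press   # same reasoning, same choice
    return payoff[(human, zombie)]

# "I'm conscious, therefore I'm the human, so I should press":
print(play(press=True))    # 0 -- both copies press, nobody wins
# Refusing to draw that inference:
print(play(press=False))   # 1000
```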
Sure, colors exist in reality, but they are patterns of neuronal excitations, not molecules. I don’t see how this belief makes me a dualist. Actually this belief killed my belief in dualism.
Maybe I misread you, but I hear your post as saying, “Colors must exist in the territory, not just the map!” And I can’t see why you believe that so strongly.
PS I greatly prefer this post to your previous one.
How do you explain the fact that the state of mind known as “seeing color” has the property that it cannot be accessed/observed by anyone except its owner (I hope you know what I mean by the “owner”), while the neuronal excitations can be observed by anyone in principle? Doesn’t it mean that colors are not neuronal excitations?
Thank you for producing a perfect example of what I called the “incomplete chain of thought”! What I called “subjective space” and “physical space”, you have called “map” and “territory”. This thing you call a “map”, conscious experience, is part of the “territory”—part of reality—which itself is supposed to be coextensive with physics. So locating colors on the map doesn’t get them off the territory. If everything real is made of physics, you still must either explain how certain patterns of neuronal excitations are actually green, or you must assert that nothing is actually green at any level of reality.
Certain patterns of neuronal excitations feel like green from the inside. I don’t understand this well enough to write a conscious computer program, but neither does anyone else (thank Bayes). I do believe that such a computer program can be written, and if that can be shown to be impossible, I will reconsider my position here (conversely, it seems that you must hold that no such computer program can be written).
It may happen that “nothing is actually green at any level of reality”, and in that case, I still say that certain patterns of neuronal excitations feel like green from the inside, even if it’s an illusion.
“Certain patterns of neuronal excitations feel like green from the inside.”
If patterns are not a fundamental part of reality, but merely the mind’s mapping of an uncaring territory, why should patterns feel anything from the inside, as opposed to being felt merely from the outside?
By saying that patterns feel something from the inside, you seem to claim that patterns are a part of reality that isn’t merely the sum of their parts.
The patterns are an organization of reality that has higher-level meaning to our minds. The meaning, as with everything, is in the interpretation, not the physical atoms.
But that’s just saying that lavalamp has a unique responsibility to solve the hard problem—everyone already knows it needs to be solved, and nobody knows how to do it. It doesn’t undermine functionalism in particular. It’s an open problem; we could just as well say that you must explain how [your preferred explanation of consciousness] is actually green.
Thank you. I’ve been typing and retyping trying to say that. I just gave up and refreshed and you’d done it already!
I guess I’m a little too tired.
Upvoted for stroking my ego.
“And yet there are these regular causal connections. These are causal connections (in both directions) between kinds of brain event and kinds of mental event, so detailed and specific that it is most improbable that they would occur without an explanation; yet it is immensely improbable that there could be a scientific explanation of the connections. Mind–brain connections are too ‘odd’ for science to explain; they cannot be consequences of a more fundamental scientific theory, and there are simply too many diverse connections to constitute laws. But once again there is available a personal explanation: God being omnipotent, is able to join souls to bodies. He can cause there to be the particular brain-event–mental-event connections that there are. He can do this by causing molecules when formed into brains to have powers to produce mental events in souls to which they are connected, and the liabilities to execute the intentions of such connected souls (new powers and liabilities not deriving from the ordinary ones, which chemistry analyses). And he can make the souls in the first place and choose to which brain (and so body) each soul is to be connected when foetal brain events require a soul to be connected to the brain. God has good reason to cause the existence of souls and join them to bodies, in the goodness (on which I commented in Chapter 6) of the existence of humanly free agents who would need to have bodies through which to have enjoyable sensations, form largely true beliefs about the world, and form their own purposes in the light of these beliefs, which would make a difference to the world. I argued that there was a significant probability that God would make such creatures. Their existence involves the existence of regular causal connections between mental events and events in their bodies. Given that humans are humanly free agents, it involves regular causal connections between mental events and events in human brains. We cannot make a difference to the world if, each time we try to move our leg, some different effect is caused in the brain and thereby in the body—one time the arm moves, one time we find ourselves sneezing, and so on. Likewise, if we are to discriminate between one object and another, they have to look (feel, etc.) different, and so there has to be a regular causal connection between the brain events caused by objects of each kind and the mental visual impressions of them. And, if we are to have the awesome power of reproduction, there have to be regular connections between our sexual acts, the foetus to which they give rise, and some soul or other linked to that foetus. God has reason to set up all these connections. He may have a reason to make this brain state cause a red sensation and that one to cause a blue sensation rather than the other way round, but, if there is no particular reason why one connection is better than a rival one, God has a reason by a ‘mental toss-up’ to produce one-or-other connection. He may have a reason to join this soul to this particular body, but again, if there is no reason for joining one soul to one body rather than to a different body, he has reason by a ‘mental toss-up’ to produce one-or-other connection—that is, to make it a chance matter which connection holds. 
So then, because we have every reason to believe that there can be no scientific theory and so scientific laws correlating brain states with souls and their states, we have every reason to believe that the causal connections that exist between them do not have a scientific explanation in terms of the properties of brain states; they are additional causal connections independent of the set of scientific laws governing the physical world. Nothing about the physical world makes it in the very least probable that there would be these connections. Let e be the existence of souls with mental states connected to brain states in the ways in which we have been analysing; k be the premisses of the arguments of the previous chapter—that there is a law-governed physical world of the type analysed in Chapter 8 with laws and boundary conditions tuned so as to allow the existence of human bodies; and let h as before be the hypothesis of theism. Then P(e | ~h) is very low. But, for all the reasons analysed in Chapter 6, a God has very good reason for creating humans (and good reason to create animals); hence P(e | h) has a moderate value. Hence the argument from consciousness is a good [inductive] argument for the existence of God.”
R. Swinburne, The Existence of God, Second Edition, ch. 9, pp. 209-211.
This is interesting, true, and really complicates any quest to maintain an accurate map.
Upvoted (the OP too). I think some of your interlocutors may be thinking past you here, in the sense that they have dismissed your central point as a triviality. But there are fundamental differences between interactions of particles in the open universe, the state changes that particle interactions cause in our sensory machinery, and what it feels like to be a brain having an experience. The suggestion that the experience of green might be illusory fails to consider that it is something occurring in a physical brain. In this sense, the most dismissive thing we might say about any quale is that it doesn’t have the meaning we readily assign to it, but that’s different from a claim of nonexistence.
I’m not philosophically sophisticated enough to judge whether this observation implies dualism. I think perhaps we’d find a lot more common ground if we discussed our expectations rather than our definitions (especially given the theological baggage that the term dualism carries).
I agree that this “map” is part of the “territory”, and that’s because the map that we’re trying to construct in philosophy—an ontology—is a map claiming to cover everything in the universe including maps.
This is a ‘why’ question, not a ‘how’ question, and though some ‘why’ questions may not be amenable to deeper explanations, ‘how’ questions are always solvable by science. Explaining how neuronal patterns generate systems with subjective experiences of green is a straightforward, though complex, scientific problem. One day we may understand this so well that we could engineer qualia on demand, or create new types of never-before-seen qualia according to some transformation rules. However, explaining ‘why’ such arrangements of matter should possess such interiority or subjectivity is, I think at least based on everything we currently know, unanswerable.
I may be being slow here, but is there any way in which you’re not just restating the hard problem of consciousness here? And that problem is a problem for all the alternatives so far, whether dualistic or monistic, and not just for functionalism? Whether you put consciousness on high-level organisation in the brain, or on quantum physics, or on some second substance, you’re going to have to explain how consciousness happens. The only ones who avoid that duty are the ones who say that mental things are fundamental, and then I just roll my eyes all the way around. And I don’t think the fact that functionalists haven’t solved the hard problem necessarily makes them dualists. As you said, functionalists believe that “Subjective space is some neural topographic map, the subjectively experienced body is the sensorimotor homunculus, and so forth.” Whatever criticism you want to level against that position, it sure doesn’t seem dualistic.
There are levels to the ontological problem of consciousness. The first level is the level where you don’t even have anything in your ontology that can be identified with consciousness. You can’t get past that level until you admit that’s where you’re at. All standard nondualistic materialist theories of consciousness contain something which in the theory is called “consciousness”, but which can’t be the real thing, for the reasons discussed in this post.
Consider the problems faced instead by a dualistic theory which explicitly says that there is a “stream of consciousness” with all the properties of the real thing, existing in parallel with a physical world. Such a theory has well-known problems of causal redundancy and logical economy, but it doesn’t have this problem of nothing being actually green, does it? Actual green exists in the stream of consciousness, along with all the other problematic realities of consciousness. The physical world remains colorless, but it doesn’t matter because this is dualism and the mind is located alongside the physical world, not in it.
Another type of “theory” which doesn’t have the problem of not containing consciousness is metaphysical idealism, the idea that there’s nothing outside consciousness, and thus no physical world at all. It’s all a dream or a hallucination by a disembodied entity.
So different theories of consciousness face very different problems. There are theories which explicitly, by construction, contain consciousness. Then there are theories which contain something they call consciousness, but which doesn’t have the right properties to be the real thing. What I would like to see is a physical theory which contains consciousness, not because we dualistically add the real thing, but because it inherently already contains that sort of entity.
You constantly equivocate between the property of being green and the experience of something green. This leads to the ancient mistake of saying that whatever constitutes your experience of something green must itself be green. Admittedly you put this enormous red herring in the mouth of your opponent, but it’s totally unwarranted nonetheless.
You also then essentially just say “But qualia! Intentionality! They’re so real! There must be something more!”, i.e. the same argument dualists have been making since the dawn of time, and you assert that any attempts to dissolve the question have failed.
Furthermore, all the arguments you use are pretty much applicable across the board, and don’t particularly relate to functionalism, so I think it’s disingenuous of you to say that you’re arguing for “functionalism implies dualism” rather than simply “dualism is true”.
Downvoted.
JJC Smart responds to people who would conflate experiences of seeing things with the actual things which are being seen in his 1959 paper “Sensations and Brain Processes”. Here he’s talking about the experience of seeing a yellow-green after-image, and responding to objections to his theory that experiences can be identical to brain processes.
The theory he is defending in the paper is an identity theory where brain states are identical to mental states, but the point still holds for functionalist theories where mental states supervene on functional states.
So, for example, nothing is ever a bridge, because it’s all just a collection of atoms, and there are no little “bridge” labels on the atoms?
Let’s switch to the same thing, but with ethics. Can things be right or wrong without having little “right” and “wrong” tags on the atoms? Have you read Lukeprog’s metaethics sequence so far? Can things have the property “Manfred would call this right” and “Manfred would call this wrong” without having little “Mwctr” and “Mwctw” tags on the atoms?
The question of “dualism” isn’t even a real question. Science tells us that a certain wavelength of light will appear to us as green. But what really is the point of knowing that? Well, it gives us a set of instructions for how to make us experience green. But the instructions for how to produce the subjective experience are not themselves the experience. The notion that if we could just figure out how to make people experience green through some manipulation we will have learned something amazing is silly. We can already do that by showing a green flag or telling someone not to think of a green rabbit.
So if we do simulate a brain and it tells you it’s conscious and experiences green (through a camera), would you then agree that there’s no need for dualism?
I don’t think there’s a need for dualism anyway; there’s a need for a new physical ontology. But a simulation of a conscious brain should tell you that it’s conscious even if it’s not, or else it’s not an effective simulation.
So p-zombies are possible, and in humans, the physical processes (of the brain) are somehow “magically” correlated / isomorphic to mental phenomena, whereas this doesn’t happen in simulations, for what (unknown?) reasons?
The p-zombie argument holds that being able to conceive of something makes it possible: p-zombies are conceivable, hence possible, therefore dualism. The tricky bit appears to be “conceive of” in a sense that implies possibility. Consider these statements:
1. I can conceive of 2+2=4 being true in conventional Peano arithmetic.
2. I can conceive of 2+2=5 being true in conventional Peano arithmetic.
3. I can conceive of P being equal to NP.
4. I can conceive of P not being equal to NP.
5. I can conceive of p-zombies, therefore dualism.
6. If I can conceive of p-zombies, then dualism; but dualism is a confused idea; therefore p-zombies are a confused idea, by reductio ad absurdum.
With the second, I am claiming to “conceive of” something trivially false. I arguably haven’t conceived of anything actually possible; I’ve just shuffled some words together.
With the third and fourth, I’m claiming to have conceived of something no-one knows (though many suspect 3 is false and 4 is true). To what extent have I actually thought it through? Thinking one of them all the way through would at some point hit a contradiction, though no-one has hit it yet. Both are “conceivable” in some sense: certainly in the sense that I’ve formed a sentence in my head that I can try out for its logical implications. But one of those statements is as wrong as 2+2=5 nevertheless.
When someone claims that p-zombies are a conceivable thing at all, and that they have conceived of them, this doesn’t actually say anything about the world or what is even possible; it just says they’ve formed a sentence in their head they think they can try out for its logical implications. But try telling them this. (I have, and haven’t managed a sufficiently robust form of 6. to be convincing.)
(I still consider that the fundamental argument in favour of dualism is that its advocates really want it to be true, and that the p-zombie argument is like creationism for smart people.)
I’ve just realised that the second zombie post in the sequences makes exactly the point I made above: the gap between “I don’t see a contradiction yet” and “this is logically possible” and what happens when you conflate the two (for instance, you might think p-zombies aren’t utterly stupid).
Lots of things don’t happen in simulations. Simulated planes don’t fly, and simulated explosions don’t destroy anything.
You say that two things can be isomorphic, or something close to it, and that both say they’re conscious, even though one is and one isn’t. Alternatively, you claim that you can know things about your substrate through introspection alone (why do you think you’re conscious, if you think something closely analogous to you would say so and be wrong?). That seems analogous to saying that zombies can exist, though maybe not as clearly problematic. Does this make you worry?
-- shit I say on Facebook
I mostly care about this kinda stuff ’cuz I’m afraid of demons (knowingly passing along fake bills).
ETA: I’d like to hear an explanation for the downvotes, for my amusement.
I didn’t downvote it, but if I’d seen it at zero I might well have done, ’cos it makes no damn sense to me. Please unpack.
I realize that I have great difficulty interacting with you civilly on Less Wrong, so I’ll just note that I have read this post, I found nothing to make me reconsider my prior stance, and I trust other people to raise the objections here that I would if I were more level-headed.
I have no problems with you, even when you’re grumpy. And sometimes there isn’t enough time to be level-headed.
Thanks. I’m sorry for the earlier insults; it bothers me greatly that I got so snide about it.
For whatever reason, I have a bad reaction to things written by you, PJ Eby, Will Newsome, and Clippy, and the smartest thing for me to do is stay off of the relevant threads until I figure out why and what to do about it.