In discussions about consciousness I constantly find myself repeating the same basic argument against the existence of qualia. I don’t do this just to be annoying: it is just my experience that
1. People find consciousness really hard to think about, and the topic has been known to cause a lot of disagreements.
2. Personally, I think that this particular argument dissolved perhaps 50% of all my confusion about the topic, and it was one of the simplest, clearest arguments that I’ve ever seen.
I am not being original either. The argument is the same one that has been used in various forms across Illusionist/Eliminativist literature that I can find on the internet. Eliezer Yudkowsky used a version of it many years ago. Even David Chalmers, who is quite the formidable consciousness realist, admits in The Meta-Problem of Consciousness that the argument is the best one he can find against his position.
The argument is simply this:
If we are able to explain why you believe in, and talk about qualia without referring to qualia whatsoever in our explanation, then we should reject the existence of qualia as a hypothesis.
This is the standard debunking argument. It has a more general form which can be used to deny the existence of a lot of other non-reductive things: distinct personal identities, gods, spirits, libertarian free will, a mind-independent morality etc. In some sense it’s just an extended version of Occam’s razor, showing us that qualia don’t do anything in our physical theories, and thus can be rejected as things that actually exist out there in any sense.
To me this argument is very clear, and yet I find myself arguing it a lot. I am not sure how else to get people to see my side of it other than sending them a bunch of articles which more-or-less make the exact same argument but from different perspectives.
I think the human brain is built to have a blind spot on a lot of things, and consciousness is perhaps one of them. I think quite a bit about how, if humanity is not able to think clearly about this thing we have spent so many research years on, then there might be some other low-hanging philosophical fruits still remaining.
Addendum: I am not saying I have consciousness figured out. However, I think it’s analogous to how atheists haven’t “got religion figured out” yet they have at the very least taken their first steps by actually rejecting religion. It’s not a full theory of religious belief, or even a theory at all. It’s just the first thing you do if you want to understand the subject. I roughly agree with Keith Frankish’s take on the matter.
If we are able to explain why you believe in, and talk about qualia without referring to qualia whatsoever in our explanation, then we should reject the existence of qualia as a hypothesis.
And I assume your claim is that we can explain why I believe in Qualia without referring to qualia?
I haven’t thought that hard about this and am open to that argument. But afaict your comments here so far haven’t actually addressed this question yet.
Edit: to be clear, I don’t really care much why other people talk about qualia. I care why I perceive myself to experience things. If it’s an illusion, cool, but then why do I experience the illusion?
If belief is construed as some sort of representation which stands for external reality (as in the case of some correspondence theories of truth), then we can take the claim to be a strong prediction of contemporary neuroscience. Ditto for whether we can explain why we talk about qualia.
It’s not that I could explain exactly why you in particular talk about qualia. It’s that we have an established paradigm for explaining it.
It’s similar in the respect that we have an established paradigm for explaining why people report being able to see color. We can model the eye, and the visual cortex, and we have some idea of what neurons do even though we lack the specific information about how the whole thing fits together. And we could imagine that in the limit of perfect neuroscience, we could synthesize this information to trace back the reason why you said a particular thing.
Since we do not have perfect neuroscience, the best analogy would be analyzing the ‘beliefs’ and predictions of an artificial neural network. If you asked me, “Why does this ANN predict that this image is a 5 with 98% probability” it would be difficult to say exactly why, even with full access to the neural network parameters.
However, we know that unless our conception of neural networks is completely incorrect, in principle we could trace exactly why the neural network made that judgement, including the exact steps that caused the neural network to have the parameters that it has in the first place. And we know that such an explanation requires only the components which make up the ANN, and not any conscious or phenomenal properties.
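The point about tracing an ANN’s judgement can be made concrete with a toy sketch (illustrative only; the sizes and weights here are made up, not anyone’s actual model): the network’s “belief” that an input is a given class is a pure function of its parameters and the input, so in principle every step of the explanation is just inspectable arithmetic, with no conscious or phenomenal properties appearing anywhere.

```python
# A minimal illustrative sketch: a tiny feed-forward "classifier" whose
# prediction is a pure function of its parameters and input. Tracing "why
# did it predict this class?" reduces to tracing this arithmetic.
import math
import random

random.seed(0)
N_IN, N_HID, N_OUT = 8, 6, 10  # toy sizes; a real digit model is just bigger

# Fixed parameters: the network's entire "state of belief".
W1 = [[random.gauss(0, 1) for _ in range(N_IN)] for _ in range(N_HID)]
W2 = [[random.gauss(0, 1) for _ in range(N_HID)] for _ in range(N_OUT)]

def predict(x):
    # Hidden layer: weighted sums passed through a ReLU.
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]
    # Output layer: weighted sums, then softmax into probabilities.
    logits = [sum(w * hi for w, hi in zip(row, h)) for row in W2]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return max(range(N_OUT), key=lambda i: probs[i]), probs

x = [random.gauss(0, 1) for _ in range(N_IN)]  # a stand-in "image"
digit, probs = predict(x)
# Every step above is inspectable: the prediction is fully accounted for
# by the weights and the input, with nothing phenomenal left over.
```

The same reasoning scales, in principle, to any network, including ones large enough to produce reports about qualia.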
I can’t tell whether we’re arguing about the same thing.
Like, I assume that I am a neural net predicting things and deciding things and if you had full access to my brain you could (in principle, given sufficient time) understand everything that was going on in there. But, like, one way or another I experience the perception of perceiving things.
(I’d prefer to taboo ‘Qualia’ in case it has particular connotations I don’t share. Just ‘that thing where Ray perceives himself perceiving things, and perhaps the part where sometimes Ray has preferences about those perceptions of perceiving because the perceptions have valence.’ If that’s what Qualia means, cool, and if it means some other thing I’m not sure I care)
My current working model of “how this aspect of my perception works” is described in this comment, I guess easy enough to quote in full:
“Human brains contain two forms of knowledge: explicit knowledge, and weights that are used in implicit knowledge (admittedly the former is hacked on top of the latter, but that isn’t relevant here). Mary doesn’t gain any extra explicit knowledge from seeing blue, but her brain changes some of her implicit weights so that when a blue object activates in her vision, a sub-neural-network can connect this to the label “blue”.”
The reason I care about any of this is that I believe that a “perceptions-having-valence” is probably morally relevant. (or, put in usual terms: suffering and pleasure seem morally relevant).
(I think it’s quite possible that future-me will decide I was confused about this part, but it’s the part I care about anyhow)
Are you saying that my perceiving-that-I-perceive-things-with-valence is an illusion, and that I am in fact not doing that? Or some other thing?
(To be clear, I AM open to ‘actually Ray, yes, the counterintuitive answer is that no, you’re not actually perceiving-that-you-perceive-things-and-some-of-the-perceptions-have-valence.’ The topic is clearly confusing, and behind the veil of epistemic-ignorance it seems quite plausible I’m the confused one here. Just noting that so far, from the way you’re phrasing things, I can’t tell whether your claims map onto the things I care about.)
Like, I assume that I am a neural net predicting things and deciding things and if you had full access to my brain you could (in principle, given sufficient time) understand everything that was going on in there. But, like, one way or another I experience the perception of perceiving things.
To me this is a bit like the claim of someone who claimed psychic powers but still wanted to believe in physics who would say, “I assume you could perfectly well understand what was going on at a behavioral level within my brain, but there is still a datum left unexplained: the datum of me having psychic powers.”
There are a number of ways to respond to the claim:
We could redefine psychic powers to include mere physical properties. This has the problem that psychics insist that psychic power is entirely separate from physical properties. Simple re-definition doesn’t make the intuition go away and doesn’t explain anything.
We could alternatively posit new physics which incorporates psychic powers. This has the problem that it violates Occam’s razor, since the old physics was completely adequate. Hence the debunking argument I presented above.
Or, we could incorporate the phenomenon within a physical model by first denying that it exists and then explaining the mechanism which caused you to believe in it, and talk about it.
In the case of consciousness, the third response amounts to Illusionism, which is the view that I am defending. It has the advantage that it conservatively doesn’t promise to contradict known physics, and it also does justice to the intuition that consciousness really exists.
I’d prefer to taboo ‘Qualia’ in case it has particular connotations I don’t share. Just ‘that thing where Ray perceives himself perceiving things, and perhaps the part where sometimes Ray has preferences about those perceptions of perceiving because the perceptions have valence.’
To most philosophers who write about it, qualia is defined as the experience of what it’s like. Roughly speaking, I agree with thinking of it as a particular form of perception that we experience.
However, it’s not just any perception, since some perceptions can be unconscious perceptions. Qualia specifically refer to the qualitative aspects of our experience of the world: the taste of wine, the touch of fabric, the feeling of seeing blue, the suffering associated with physical pain etc. These are said to be directly apprehensible to our ‘internal movie’ that is playing inside our head. It is this type of property which I am applying the framework of illusionism to.
The reason I care about any of this is that I believe that a “perceptions-having-valence” is probably morally relevant.
I agree. That’s why I typically take the view that consciousness is a powerful illusion, and that we should take it seriously. Those who simply re-define consciousness as essentially a synonym for “perception” or “observation” or “information” are not doing justice to the fact that it’s the thing I care about in this world. I have a strong intuition that consciousness is what is valuable even despite the fact that I hold an illusionist view. To put it another way, I would care much less if you told me a computer was receiving a pain-signal (labeled in the code as some variable with suffering set to maximum), compared to the claim that a computer was actually suffering in the same way a human does.
Are you saying the my perceiving-that-I-perceive-things-with-valence is an illusion, and that I am in fact not doing that? Or some other thing?
Roughly speaking, yes. I am denying that that type of thing actually exists, including the valence claim.
Or, we could incorporate the phenomenon within a physical model by first denying that it exists and then explaining the mechanism which caused you to believe in it, and talk about it.
It still feels very important that you haven’t actually explained this.
In the case of psychic powers, I think (?) we actually have pretty good explanations for where perceptions of psychic powers come from, which makes the perception of psychic powers non-mysterious (i.e. we know how cold reading works, and how various kinds of confirmation bias play into divination). But that was something that actually had to be explained.
It feels like you’re just changing the name of the confusing thing from ‘the fact that I seem conscious to myself’ to ‘the fact that I’m experiencing an illusion of consciousness.’ Cool, but, like, there’s still a mysterious thing that seems quite important to actually explain.
Also just in general, I disagree that skepticism is not progress. If I said, “I don’t believe in God because there’s nothing in the universe with those properties...” I don’t think it’s fair to say, “Cool, but like, I’m still praying to something right, and that needs to be explained” because I don’t think that speaks fully to what I just denied.
In the case of religion, many people have a very strong intuition that God exists. So, is the atheist position not progress because we have not explained this intuition?
I agree that skepticism generally can be important progress (I recently stumbled upon this old comment making a similar argument about how saying “not X” can be useful)
The difference between God and consciousness is that the interesting bit about consciousness *is* my perception of it, full stop. Unlike God or psychic powers, there is no separate thing from my perception of it that I’m interested in.
The difference between God and consciousness is that the interesting bit about consciousness *is* my perception of it, full stop.
If by perception you simply mean “You are an information processing device that takes signals in and outputs things” then this is entirely explicable on our current physical models, and I could dissolve the confusion fairly easily.
However, I think you have something else in mind which is that there is somehow something left out when I explain it by simply appealing to signal processing. In that sense, I think you are falling right into the trap! You would be doing something similar to the person who said, “But I am still praying to God!”
However, I think you have something else in mind which is that there is somehow something left out when I explain it by simply appealing to signal processing. In that sense,
I don’t have anything else in mind that I know of. “Explained via signal processing” seems basically sufficient. The interesting part is “how can you look at a given signal-processing-system, and predict in advance whether that system is the sort of thing that would talk* about Qualia, if it could talk?”
(I feel like this was all covered in the sequences, basically?)
*where “talk about qualia” is shorthand for ‘would consider the concept of qualia important enough to have a concept for.’
I mean, I agree that this was mostly covered in the sequences. But I also think that I disagree with the way that most people frame the debate. At least personally I have seen people who I know have read the sequences still make basic errors. So I’m just leaving this here to explain my point of view.
Intuition: On a first approximation, there is something that it is like to be us. In other words, we are beings who have qualia.
Counterintuition: In order for qualia to exist, there would need to exist entities which are private, ineffable, intrinsic, and subjective; this can’t be, since physics is public, effable, and objective, and therefore contradicts the existence of qualia.
Intuition: But even if I agree with you that qualia don’t exist, there still seems to be something left unexplained.
Counterintuition: We can explain why you think there’s something unexplained because we can explain the cause of your belief in qualia, and why you think they have these properties. By explaining why you believe it we have explained all there is to explain.
Intuition: But you have merely said that we could explain it. You have not actually explained it.
Counterintuition: Even without the precise explanation, we now have a paradigm for explaining consciousness, so it is not mysterious anymore.
We do not telepathically receive experiment results when they are performed. In reality you need to take in the measurement results from your first-person point of view (use your eyes to read an LED screen, or use your ears to hear stories of experiments performed). It seems that experiments are intersubjective in that other observers will report having experiences that resemble my first-hand experiences. For most purposes shorthanding this to “public” is adequate enough. But your point of view is “unpublishable” in that even if you really tried, there is no way to provide your private experience to the public knowledge pool (“directly”). “I know how you feel” is a fiction; it doesn’t actually happen.
Skepticism about the experiences of others is easier, but being skeptical about your own experiences would seem to be ludicrous.
I am not denying that humans take in sensory input and process it using their internal neural networks. I am denying that this process has any of the properties associated with consciousness in the philosophical sense. And I am making an additional claim, which is that if you merely redefine consciousness so that it lacks these philosophical properties, you have not actually explained anything or dissolved any confusion.
The illusionist approach is the best approach because it simultaneously takes consciousness seriously and doesn’t contradict physics. By taking this approach we also have an understood paradigm for solving the hard problem of consciousness: namely, the hard problem is reduced to the meta-problem (see Chalmers).
It feels like you’re just changing the name of the confusing thing from ‘the fact that I seem conscious to myself’ to ‘the fact that I’m experiencing an illusion of consciousness.’ Cool, but, like, there’s still a mysterious thing that seems quite important to actually explain.
I don’t actually agree. Although I have not fully explained consciousness, I think that I have shown a lot.
In particular, I have shown us what the solution to the hard problem of consciousness would plausibly look like if we had unlimited funding and time. And to me, that’s important.
And under my view, it’s not going to look anything like, “Hey we discovered this mechanism in the brain that gives rise to consciousness.” No, it’s going to look more like, “Look at this mechanism in the brain that makes humans talk about things even though the things they are talking about have no real world referent.”
You might think that this is a useless achievement. I claim the contrary. As Chalmers points out, pretty much all the leading theories of consciousness fail the basic test of looking like an explanation rather than just sounding confused. Don’t believe me? Read Section 3 in this paper.
In short, Chalmers reviews the current state of the art in consciousness explanations. He first goes into Integrated Information Theory (IIT), but then convincingly shows that IIT fails to explain why we would talk about consciousness and believe in consciousness. He does the same for global workspace theories, first order representational theories, higher order theories, consciousness-causes-collapse theories, and panpsychism. Simply put, none of them even approach an adequate baseline of looking like an explanation.
I also believe that if you follow my view carefully you might stop being confused about a lot of things. Like, do animals feel pain? Well it depends on your definition of pain—consciousness is not real in any objective sense so this is a definition dispute. Same with asking whether person A is happier than person B, or asking whether computers will ever be conscious.
Perhaps this isn’t an achievement strictly speaking relative to the standard Lesswrong points of view. But that’s only because I think the standard Lesswrong point of view is correct. Yet even so, I still see people around me making fundamentally basic mistakes about consciousness. For instance, I see people treating consciousness as intrinsic, ineffable, private—or they think there’s an objectively right answer to whether animals feel pain and argue over this as if it’s not the same as a tree falling in a forest.
And we know that such an explanation requires only the components which make up the ANN, and not any conscious or phenomenal properties.
That’s an argument against dualism, not an argument against qualia. If mind-brain identity is true, neural activity is causing reports, and qualia, along with the rest of consciousness, are identical to neural activity, so qualia are also causing reports.
If you identify qualia as behavioral parts of our physical models, then are you also willing to discard the properties philosophers have associated with qualia, such as
Ineffable, as they can’t be explained using just words or mathematical sentences
Private, as they are inaccessible to outside third-person observers
Intrinsic, as they are fundamental to the way we experience the world
If you are willing to discard these properties, then I suggest we stop using the word “qualia”, since you have simply taken all the meaning away once you have identified them with things that actually exist. This is what I mean when I say that I am denying qualia.
It is analogous to someone who denies that souls exist by first conceding that we could identify certain physical configurations as examples of souls, but then explaining that this would be confusing to anyone who talks about souls in the traditional sense. Far better in my view to discard the idea altogether.
My orientation to this conversation seems more like “hmm, I’m learning that it is possible the word qualia has a bunch of connotations that I didn’t know it had”, as opposed to “hmm, I was wrong to believe in the-thing-I-was-calling-qualia.”
But I’m not yet sure that these connotations are actually universal – the wikipedia article opens with:
In philosophy and certain models of psychology, qualia (/ˈkwɑːliə/ or /ˈkweɪliə/; singular form: quale) are defined as individual instances of subjective, conscious experience. The term qualia derives from the Latin neuter plural form (qualia) of the Latin adjective quālis (Latin pronunciation: [ˈkʷaːlɪs]) meaning “of what sort” or “of what kind” in a specific instance, like “what it is like to taste a specific apple, this particular apple now”.
Examples of qualia include the perceived sensation of pain of a headache, the taste of wine, as well as the redness of an evening sky. As qualitative characters of sensation, qualia stand in contrast to “propositional attitudes”,[1] where the focus is on beliefs about experience rather than what it is directly like to be experiencing.
Philosopher and cognitive scientist Daniel Dennett once suggested that qualia was “an unfamiliar term for something that could not be more familiar to each of us: the ways things seem to us”.[2]
Much of the debate over their importance hinges on the definition of the term, and various philosophers emphasize or deny the existence of certain features of qualia. Consequently, the nature and existence of various definitions of qualia remain controversial because they are not verifiable.
Later on, it notes the three characteristics (ineffable/private/intrinsic) that Dennett listed.
But this looks more like an accident of history than something intrinsic to the term. The opening paragraphs defined qualia the way I naively expected it to be defined.
My impression, looking at the various definitions and discussion, is not that qualia was defined in this specific fashion, so much as that various people trying to grapple with a confusing problem generated various possible definitions and rules for it, and some of those turned out to be false once we came up with better understanding.
I can see where you’re coming from with the soul analogy, but I’m not sure if it’s more like the soul analogy, or more like ‘One early philosopher defined a human as a featherless biped, and then a later one said “dude, look at this featherless chicken I just made”, and they realized the definition was silly.’
I guess my question here is – do you have a suggestion for a replacement word for “the particular kind of observation that gets made by an entity that actually gets to experience the perception”? This still seems importantly different from “just a perception”, since very simple robots and thermostats or whatever can be said to have those. I don’t really care whether they are inherently private, ineffable or intrinsic, and whether Daniel Dennett was able to eff them seems more like a historical curiosity to me.
The wikipedia article specifically says that people argue a lot over the definitions:
There are many definitions of qualia, which have changed over time. One of the simpler, broader definitions is: “The ‘what it is like’ character of mental states. The way it feels to have mental states such as pain, seeing red, smelling a rose, etc.”
That definition there is the one I’m generally using, and the one which seems important to have a word for. This seems more like a political/coordination question of ‘is it easier to invent a new word and gain traction for it, or to get everyone on the same page about: actually, they’re totally in principle effable, you just might need to be a kind of mind different from a current-generation human to properly eff them.’
It does seem to me something like “I expect the sort of mind that is capable of viewing qualia of other people would be sufficiently different from a human mind that it may still be fair to call them ‘private/ineffable among humans.’”
I know I’m not being as clear as I could possibly be, and at some points I sort of feel like just throwing “Quining Qualia” or Keith Frankish’s articles or a whole bunch of other blog posts at people and say, “Please just read this and re-read it until you have a very distinct intuition about what I am saying.” But I know that that type of debate is not helpful.
I think I have an OK-to-good understanding of what you are saying. My model of your reply is something like this:
“Your claim is that qualia don’t exist because nothing with these three properties exists (ineffability/private/intrinsic), but it’s not clear to me that these three properties are universally identified with qualia. When I go to Wikipedia or other sources, they usually identify qualia with ‘what it’s like’ rather than these three very specific things that Daniel Dennett happened to list once. So, I still think that I am pointing to something real when I talk about ‘what it’s like’ and you are only disputing a perhaps-strawman version of qualia.”
Please correct me if this model of you is inaccurate.
I recognize what you are saying, and I agree with the place you are coming from. I really do. And furthermore, I really really agree with the idea that we should go further than skepticism and we should always ask more questions even after we have concluded that something doesn’t exist.
However, the place I get off the boat is where you keep talking about how this ‘what it’s like’ thing is actually referring to something coherent in the real world that has a crisp, natural boundary around it. That’s the disagreement.
I don’t think it’s an accident of history either that those properties are identified with qualia. The whole reason Daniel Dennett identified them was because he showed that they were the necessary conclusion of the sort of thought experiments people use for qualia. He spends the whole first several paragraphs justifying them using various intuition pumps in his essay on the matter.
Point being, when you are asked to clarify what ‘what it’s like’ means, you’ll probably start pointing to examples. Like, you might say, “Well, I know what it’s like to see the color green, so that’s an example of a quale.” And Daniel Dennett would then press the person further and go, “OK, could you clarify what you mean when you say you ‘know what it’s like to see green’?” and the person would say, “No, I can’t describe it using words. And it’s not clear to me that it’s even the sort of thing that could be described, since I can’t possibly conceive of an English sentence that would describe the color green to a blind person.” And then Daniel Dennett would shout, “Aha! So you do believe in ineffability!”
The point of those three properties (actually he lists 4, I think), is not that they are inherently tied to the definition. It’s that the definition is vague, and every time people are pressed to be more clear on what they mean, they start spouting nonsense. Dennett did valid and good deconfusion work where he showed that people go wrong in these four places, and then showed how there’s no physical thing that could possibly allow those four things.
These properties also show up all over the various thought experiments that people use when talking about qualia. For example, Nagel uses the private property in his essay “What Is it Like to Be a Bat?” Chalmers uses the intrinsic property when he talks about p-zombies being physically identical to humans in every respect except for qualia. Frank Jackson used the ineffability property when he talked about how Mary the neuroscientist had something missing when she was in the black and white room.
All of this is important to recognize. Because if you still want to say, “But I’m still pointing to something valid and real even if you want to reject this other strawman-entity” then I’m going to treat you like the person who wants to believe in souls even after they’ve been shown that nothing soul-like exists in this universe.
Spouting nonsense is different from being wrong. If I say that there are no rectangles with 5 angles, that can be processed pretty straightforwardly, because the concept of a rectangle is unproblematic. But if you seek why that statement was made and the person points to a pentagon, you will find 5 angles. Now there are polygons with 5 angles. If you give a short word for “5-angle rectangle”, it’s correct to say those don’t exist. But if you give an ostensive definition of the shape, then it does exist, and it’s more to the point to say that it’s not a rectangle rather than that it doesn’t exist.
In the details, when persons say “what it is like to see green” one could fail to get what they mean or point to. If someone says “look, a unicorn” and one has proof that unicorns don’t exist, that doesn’t mean that the unicorn reference is not referencing something, or that the reference target does not exist. If you end up in a situation where you point at a horse and say “those things do not exist. Look, no horn, doesn’t exist” you are not being helpful. If somebody is pointing to a horse and says “look, a unicorn!” and you go “where? I see only horses” you are also not being helpful. Being “motivatedly uncooperative in ostension receiving” is not cool. Say that you made a deal to sell a gold bar in exchange for a unicorn. Then refusing to accept any object as a unicorn would let you keep your gold bar, and you might be tempted to play dumb.
When people say “what it feels like to see green” they are trying to communicate something, and defeating their assertion by sabotaging their communication doesn’t prove anything. Communication is hard, yes, but doing too much semantic substitution means you start talking past each other.
I am not suggesting that qualia should be identified with neural activity in a way that loses any aspects of the philosophical definition… bearing in mind that the philosophical definition does not assert that qualia are non-physical.
I won’t lie—I have a very strong intuition that there’s this visual field in front of me, and that I can hear sounds that have distinct qualities, and simultaneously I can feel thoughts rush into my head as if there is an internal speaker and listener. And when I reflect on some visual in the distance, it seems as though the colors are very crisp and exist in some way independent of simple information processing in a computer-type device. It all seems very real to me.
I think the main claim of the illusionist is that these intuitions (at least insofar as the intuitions are making claims about the properties of qualia) are just radically incorrect. It’s as if our brains have an internal error in them, not allowing us to understand the true nature of these entities. It’s not that we can’t see or something like that. It’s just that the quality of perceiving the world has essentially an identical structure to what one might imagine a computer with a camera would “see.”
Analogy: Some people who claim to have experienced heaven aren’t just making stuff up. In some sense, their perception is real. It just doesn’t have the properties we would expect it to have at face value. And if we actually tried looking for heaven in the physical world we would find it to be little else than an illusion.
What’s the difference between making claims about nearby objects and making claims about qualia (if there is one)? If I say there’s a book to my left, is that saying something about qualia? If I say I dreamt about a rabbit last night, is that saying something about qualia?
(Are claims of the form “there is a book to my left” radically incorrect?)
That is, is there a way to distinguish claims about qualia from claims about local stuff/phenomena/etc?
Sure. There are a number of properties usually associated with qualia which are the things I deny. If we strip these properties away (something Keith Frankish refers to as zero qualia) then we can still say that they exist. But it’s confusing to say that something exists when its properties are so minimal. Daniel Dennett listed a number of properties that philosophers have assigned to qualia and conscious experience more generally:
Ineffable because there’s something Mary the neuroscientist is missing when she is in the black-and-white room, and someone who tried explaining color to her would not be able to do so fully.
Intrinsic because it cannot be reduced to bare physical entities, like electrons (think: could you construct a quale if you had the right set of particles?).
Private because they are accessible to us and not globally available. In this sense, if you tried to find out the qualia that a mouse was experiencing as it fell victim to a trap, you would come up fundamentally short because it was specific to the mouse mind and not yours. Or as Nagel put it, there’s no way that third person science could discover what it’s like to be a bat.
Directly apprehensible because they are the elementary things that make up our experience of the world. Look around and qualia are just what you find. They are the building blocks of our perception of the world.
It’s not necessarily that none of these properties could be steelmanned. It is just that they are so far from being steelmannable that it is better to deny their existence entirely. It is the same as my analogy with a person who claims to have visited heaven. We could either talk about it as illusory or non-illusory. But for practical purposes, if we chose the non-illusory route we would probably be quite confused. That is, if we tried finding heaven inside the physical world, with the same properties as the claimant had proposed, then we would come up short. Far better, then, to treat it as a mistake inside our cognitive hardware.
Thanks for the elaboration. It seems to me that experiences are:
Hard-to-eff, as a good-enough theory of what physical structures have which experiences has not yet been discovered, and would take philosophical work to discover.
Hard to reduce to physics, for the same reason.
In practice private due to mind-reading technology not having been developed, and due to bandwidth and memory limitations in human communication. (It’s also hard to imagine what sort of technology would allow replicating the experience of being a mouse)
Pretty directly apprehensible (what else would be? If nothing is, what do we build theories out of?)
It seems natural to conclude from this that:
Physical things exist.
Experiences exist.
Experiences probably supervene on physical things, but the supervenience relation is not yet determined, and determining it requires philosophical work.
Given that we don’t know the supervenience relation yet, we need to at least provisionally have experiences in our ontology distinct from physical entities. (It is, after all, impossible to do physics without making observations and reporting them to others)
Here’s a thought experiment which helped me lose my ‘belief’ in qualia: would a robot scientist, who was only designed to study physics and make predictions about the world, ever invent qualia as a hypothesis?
Assuming the actual mouth movements we make when we say things like, “Qualia exist” are explainable via the scientific method, the robot scientist could still predict that we would talk and write about consciousness. But would it posit consciousness as a separate entity altogether? Would it treat consciousness as a deep mystery, even after peering into our brains and finding nothing but electrical impulses?
Robots take in observations. They make theories that explain their observations. Different robots will make different observations and communicate them to each other. Thus, they will talk about observations.
After making enough observations they make theories of physics. (They had to talk about observations before they made low-level physics theories, though; after all, they came to theorize about physics through their observations). They also make bridge laws explaining how their observations are related to physics. But, they have uncertainty about these bridge laws for a significant time period.
The robots theorize that humans are similar to them, based on the fact that they have functionally similar cognitive architecture; thus, they theorize that humans have observations as well. (The bridge laws they posit are symmetric that way, rather than being silicon-chauvinist)
I think you are using the word “observation” to refer to consciousness. If this is true, then I do not deny that humans take in observations and process them.
However, I think the issue is that you have simply re-defined consciousness into something which would be unrecognizable to the philosopher. To that extent, I don’t say you are wrong, but I will allege that you have not done enough to respond to the consciousness-realist’s intuition that consciousness is different from physical properties. Let me explain:
If qualia are just observations, then it seems obvious that Mary is not missing any information in her room, since she can perfectly well understand and model the process by which people receive color observations.
Likewise, if qualia are merely observations, then the Zombie argument amounts to saying that p-Zombies are beings which can’t observe anything. This seems patently absurd to me, and doesn’t seem like it’s what Chalmers meant at all when he came up with the thought experiment.
Likewise, if we were to ask, “Is a bat conscious?” then the answer would be a vacuous “yes” under your view, since bats have echolocators which take in observations and process information.
In this view even my computer is conscious since it has a camera on it. For this reason, I suggest we are talking about two different things.
Mary’s room seems uninteresting, in that robot-Mary can predict pretty well what bit-pattern she’s going to get upon seeing color. (To the extent that the human case is different, it’s because of cognitive architecture constraints)
Regarding the zombie argument: The robots have uncertainty over the bridge laws. Under this uncertainty, they may believe it is possible that humans don’t have experiences, due to the bridge laws only identifying silicon brains as conscious. Then humans would be zombies. (They may have other theories saying this is pretty unlikely / logically incoherent / etc)
Basically, the robots have a primitive entity “my observations” that they explain using their theories. They have to reconcile this with the eventual conclusion they reach that their observations are those of a physically instantiated mind like other minds, and they have degrees of freedom in which things they consider “observations” of the same type as “my observations” (things that could have been observed).
As a qualia denier, I sometimes feel like I side more with the Chalmers side of the argument, which at least admits that there’s a strong intuition for consciousness. It’s not that I think that the realist side is right, but it’s that I see the naive physicalists making statements that seem to completely misinterpret the realist’s argument.
I don’t mean to single you out in particular. However, you state that Mary’s room seems uninteresting because Mary is able to predict the “bit pattern” of color qualia. This seems to me to completely miss the point. When you look at the sky and see blue, is it immediately apprehensible as a simple bit pattern? Or does it at least seem to have qualitative properties too?
I’m not sure how to import my argument onto your brain without you at least seeing this intuition, which is something I considered obvious for many years.
There is a qualitative redness to red. I get that intuition.
I think “Mary’s room is uninteresting” is wrong; it’s uninteresting in the case of robot scientists, but interesting in the case of humans, in part because of what it reveals about human cognitive architecture.
I think in the human case, I would see Mary seeing a red apple as gaining in expressive vocabulary rather than information. She can then describe future things as “like what I saw when I saw that first red apple”. But, in the case of first seeing the apple, the redness quale is essentially an arbitrary gensym.
I suppose I might end up agreeing with the illusionist view on some aspects of color perception, then, in that I predict color quales might feel like new information when they actually aren’t. Thanks for explaining.
I predict color quales might feel like new information when they actually aren’t.
I am curious if you disagree with the claim that (human) Mary is gaining implicit information, in that (despite already knowing many facts about red-ness), her (human) optic system wouldn’t have successfully been able to predict the incoming visual data from the apple before seeing it, but afterwards can?
Now that I think about it, due to this cognitive architecture issue, she actually does gain new information. If she sees a red apple in the future, she can know that it’s red (because it produces the same qualia as the first red apple), whereas she might be confused about the color if she hadn’t seen the first apple.
I think I got confused because, while she does learn something upon seeing the first red apple, it isn’t the naive “red wavelengths are red-quale”, it’s more like “the neurons that detect red wavelengths got wired and associated with the abstract concept of red wavelengths.” Which is still, effectively, new information to Mary-the-cognitive-system, given limitations in human mental architecture.
A physicist might discover that you can make computers out of matter. You can make such computers produce sounds. In processing sounds, “homonym” is a perfectly legitimate and useful concept. Even if two words are stored in far-away hardware locations, knowing that they will “sound-detection clash” is important information. Even if you slice it a little differently and use different kinds of computer architectures, it would still be a real phenomenon.
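The homonym point can be sketched in a few lines of code. This is a minimal illustration with made-up data (the lexicon, the "bank" storage locations, and the pronunciation strings are all hypothetical): two words stored in different "hardware locations" still clash at the level of sound detection, so the concept is real regardless of how the hardware is laid out.

```python
# Hypothetical lexicon: word -> (storage location, pronunciation).
# The storage locations are deliberately scattered to mirror the point that
# "homonym" is a functional concept, independent of where words live in hardware.
lexicon = {
    "night":  ("bank_A", "nait"),
    "knight": ("bank_B", "nait"),
    "see":    ("bank_A", "si:"),
    "sea":    ("bank_C", "si:"),
    "dog":    ("bank_B", "dog"),
}

def homonyms(lexicon):
    """Group words by pronunciation; groups larger than one will 'sound-detection clash'."""
    by_sound = {}
    for word, (_location, sound) in lexicon.items():
        by_sound.setdefault(sound, []).append(word)
    return {s: sorted(ws) for s, ws in by_sound.items() if len(ws) > 1}

print(homonyms(lexicon))
# -> {'nait': ['knight', 'night'], 'si:': ['sea', 'see']}
```

The grouping ignores the storage location entirely, which is the point: the clash is detectable at the sound level no matter which architecture holds the words.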
In technical terms, there might be an issue of whether it’s meaningful to differentiate between founded concepts and hypotheses. If hypotheses were required, then you could have a physicist who never talked about temperature.
It seems to me that you are trying to recover the properties of conscious experience in a way that can be reduced to physics. Ultimately, I just feel that this approach is not likely to succeed without radical revisions to what you consider to be conscious experience. :)
Generally speaking, I agree with the dualists who argue that physics is incompatible with the claimed properties of qualia. Unlike the dualists, I see this as a strike against qualia rather than a strike against physics. David Chalmers does a great job in his articles outlining why conscious properties don’t fit nicely in our normal physical models.
It’s not simply that we are awaiting more data to fill in the details: it’s that there seems to be no way even in principle to incorporate conscious experience into physics. Physics is just a different type of beast: it has no mental core, it is entirely made up of mathematical relations, and is completely global. Consciousness as it’s described seems entirely inexplicable in that respect, and I don’t see how it could possibly supervene on the physical.
One could imagine a hypothetical heaven-believer (someone who claimed to have gone to heaven and back) listing possible ways to incorporate their experience into physics. They could say,
Hard-to-eff, as it’s not clear how physics interacts with the heavenly realm. We must do more work to find out where the entry points of heaven and earth are.
In practice private due to the fact that technology hasn’t been developed yet that can allow me to send messages back from heaven while I’m there.
Pretty directly apprehensible because how would it even be possible for me to have experienced that without heaven literally being real!
On the other hand, a skeptic could reply that:
Even if mind reading technology isn’t good enough yet, our best models say that humans can be described as complicated computers with a particular neural network architecture. And we know that computers can have bugs in them causing them to say things when there is no logical justification.
Also, we know that computers can lack perfect introspection so we know that even if it is utterly convinced that heaven is real, this could just be due to the fact that the computer is following its programming and is exceptionally stubborn.
Heaven has no clear interpretation in our physical models. Yes, we could see that a supervenience is possible. But why rely on that hope? Isn’t it better to say that the belief is caused by some sort of internal illusion? The latter hypothesis is at least explicable within our models and doesn’t require us to make new fundamental philosophical advances.
It seems that doubting that we have observations would cause us to doubt physics, wouldn’t it? Since physics-the-discipline is about making, recording, communicating, and explaining observations.
Why think we’re in a physical world, if the observations that seem to suggest we are in one are illusory?
This is kind of like if the people saying we live in a material world arrived at these theories through their heaven-revelations, and can only explain the epistemic justification for belief in a material world by positing heaven. Seems odd to think heaven doesn’t exist in this circumstance.
(Note, personally I lean towards supervenient neutral monism: direct observation and physical theorizing are different modalities for interacting with the same substance, and mental properties supervene on physical ones in a currently-unknown way. Physics doesn’t rule out observation, in fact it depends on it, while itself being a limited modality, such that it is unsurprising if you couldn’t get all modalities through the physical-theorizing modality. This view seems non-contradictory, though incomplete.)
There is the phenomenon of qualia and then there is the ontological extension. The word does not refer to the ontological extension.
It would be like explaining lightning with lightning. Sure when we dig down there are non-lightning parts. But lightning still zaps people.
Or it would be a category error, like saying that if you can explain physics without coordinates, by only positing that energy exists, you should drop coordinates from your concepts. But coordinates are not a thing to believe in; they are a conceptual tool to specify claims, not a hypothesis in themselves. When physicists believe in a particular field theory they are not agreeing with the Greek philosophers who think that the world is made of a type of number.
There is the phenomenon of qualia and then there is the ontological extension. The word does not refer to the ontological extension.
My basic claim is that the way that people use the word qualia implicitly implies the ontological extensions. By using the term, you are either smuggling these extensions in, or you are using the term in a way that no philosopher uses it. Here are some intuitions:
Qualia are private entities which occur to us and can’t be inspected via third person science.
Qualia are ineffable; you can’t explain them using a sufficiently complex English or mathematical sentence.
Qualia are intrinsic; you can’t construct a quale if you had the right set of particles.
etc.
Now, that’s not to say that you can’t define qualia in such a way that these ontological extensions are avoided. But why do so? If you are simply re-defining the phenomenon, then you have not explained anything. The intuitions above still remain, and there is something still unexplained: namely, why people think that there are entities with the above properties.
That’s why I think that instead, the illusionist approach is the correct one. Let me quote Keith Frankish, who I think does a good job explaining this point of view,
Suppose we encounter something that seems anomalous, in the sense of being radically inexplicable within our established scientific worldview. Psychokinesis is an example. We would have, broadly speaking, three options.
First, we could accept that the phenomenon is real and explore the implications of its existence, proposing major revisions or extensions to our science, perhaps amounting to a paradigm shift. In the case of psychokinesis, we might posit previously unknown psychic forces and embark on a major revision of physics to accommodate them.
Second, we could argue that, although the phenomenon is real, it is not in fact anomalous and can be explained within current science. Thus, we would accept that people really can move things with their unaided minds but argue that this ability depends on known forces, such as electromagnetism.
Third, we could argue that the phenomenon is illusory and set about investigating how the illusion is produced. Thus, we might argue that people who seem to have psychokinetic powers are employing some trick to make it seem as if they are mentally influencing objects.
In the case of lightning, I think that the first approach would be correct, since lightning forms a valid physical category under which we can cast our scientific predictions of the world. In the case of the orbit of Uranus, the second approach is correct, since it was adequately explained by appealing to understood Newtonian physics. However, the third approach is most apt for bizarre phenomena that seem at first glance to be entirely incompatible with our physics. And qualia certainly fit the bill in that respect.
When I say “qualia” I mean individual instances of subjective, conscious experience full stop. These three extensions are not what I mean when I say “qualia”.
Qualia are private entities which occur to us and can’t be inspected via third person science.
Not convinced of this. There are known neural correlates of consciousness. That our current brain scanners lack the required resolution to make them inspectable does not prove that they are not inspectable in principle.
Qualia are ineffable; you can’t explain them using a sufficiently complex English or mathematical sentence.
This seems to be a limitation of human language bandwidth/imagination, but not fundamental to what qualia are. Consider the case of the conjoined twins Krista and Tatiana, who share some brain structure and seem to be able to “hear” each other’s thoughts and see through each other’s eyes.
Suppose we set up a thought experiment. Suppose that they grow up in a room without color, like Mary’s room. Now knock out Krista and show Tatiana something red. Remove the red thing before Krista wakes up. Wouldn’t Tatiana be able to communicate the experience of red to her sister? That’s an effable quale!
And if they can do it, then in principle, so could you, with a future brain-computer interface.
Really, communicating at all is a transfer of experience. We’re limited by common ground, sure. We both have to be speaking the same language, and have to have enough experience to be able to imagine the other’s mental state.
Qualia are intrinsic; you can’t construct a quale if you had the right set of particles.
Again, not convinced. Isn’t your brain made of particles? I construct qualia all the time just by thinking about it. (It’s called “imagination”.) I don’t see any reason in principle why this could not be done externally to the brain either.
The Tatiana and Krista experiment is quite interesting, but it stretches the concept of communication to its limits. I am inclined to say that having a shared part of your consciousness is not communication, in the same way that sharing a house is not traffic. It does strike me that communication involves directed construction of thoughts, and it’s easy to imagine that the scope of what this construction is capable of would be vastly smaller than what goes on in the brain in other processes. Extending the construction to new types of thoughts might be a soft border rather than a hard one. With enough verbal sentences it should in principle be possible to reconstruct an actual graphical image, but even with overtly descriptive prose this level is not really reached (I presume); it remains within the realm of sentence-like data structures.
In the example, Tatiana directs the visual cortex and Krista can just recall the representation later. But in a single-consciousness brain nothing can be made “ready”; it must be assembled by the brain itself from sensory inputs. That is, cognitive space probably has small funnels, and significant objects can’t travel through them as themselves but must be chopped into pieces and reassembled after passing through the tube.
Let’s extend the thought experiment a bit. Suppose technology is developed to separate the twins. They rely on their shared brain parts for vital functions, so where we cut nerve connections we replace them with a radio transceiver and electrode array in each twin.
Now they are communicating thoughts via a prosthesis. Is that not communication?
Maybe you already know what it is like to be a hive mind with a shared consciousness, because you are one: cutting the corpus callosum creates a split-brained patient that seems to have two different personalities that don’t always agree with each other. Maybe there are some connections left, but the bandwidth has been drastically reduced. And even within hemispheres, the brain seems to be composed of yet smaller modules. Your mind is made of parts that communicate with each other and share experience, and some of it is conscious.
I think the line dividing individual persons is a soft one. A sufficiently high-bandwidth communication interface can blur that boundary, even to the point of fusing consciousnesses like brain hemispheres. Shared consciousness means shared qualia; even if that connection is later severed, you might still remember what it was like to be the other person. And in that way, qualia could hypothetically be communicated between individuals, or even species.
If you would copy my brain but make it twice as large that copy would be as “lonely” as I would be and this would remain after arbitrary doublings. Single individuals can be extended in space without communicating with other individuals.
The “extended wire” thought experiment doesn’t specify enough how that physical communication line is used. It’s plausible that there is no “verbalization” process, the way there is a step of writing an email if one replaces sonic communication with IP-packet communication. With huge relative distance would come speed-of-light delays: if one twin were on Earth and the other on the Moon, there would be a round-trip latency of seconds, which would probably distort how the combined brain works. (And I guess doubling in size would need to come with proportionate slowing to preserve the same function.)
I think there is a difference between one information system being spatially extended and two information systems interfacing with each other. Say that you have 2 routers, or 10 routers, on the same length of line. It makes sense to make a distinction that each router functions “independently,” even if they have to cooperate enough that packets flow through. To the first router the world “downline” seems very similar whether or not intermediate routers exist. I don’t count an information system’s internal processing as communicating, and thus I don’t count “thinking” as communicating. Thus the 10-router version does more communicating than the 2-router version.
I think the “verbalization” step does mean that even a high-bandwidth connection doesn’t automatically mean qualia sharing. I am thinking of plugins that allow programming languages to share code. Even if there is a perfect 1-to-1 compatibility between the abstractions of the languages, I think each language still only ever manipulates its own version of that representation. Cross-using without translation would make it ill-defined what correct function would be, but if you do translation then it loses the qualities of the originating programming language. A C# integer variable will never contain a Haskell integer, even if a C# integer is constructed to represent the Haskell integer. (I guess it would be possible to make a super-language that has integer variables that can contain Haskell integers and C# integers, but that language would not be C# or Haskell.) By being a specific kind of cognitive architecture, you are locked into certain representation types, which are inescapable short of turning into another kind of architecture.
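The representation lock-in point can be sketched in code. This is a loose illustration, using two hypothetical Python wrapper classes as stand-ins for the C# and Haskell integer types: translation can preserve the bit pattern perfectly, yet each side still only ever holds its own kind of object.

```python
# Two "languages," each with its own integer representation. The bit patterns
# can be identical, but the types remain distinct: a CSharpInt never *contains*
# a HaskellInt, it can only be translated into a new one.

class CSharpInt:
    def __init__(self, bits: int):
        self.bits = bits

class HaskellInt:
    def __init__(self, bits: int):
        self.bits = bits

def translate(x: CSharpInt) -> HaskellInt:
    # Translation is a perfect 1-to-1 copy of the bit pattern, but it produces
    # a new object of the other type rather than letting one type inhabit the other.
    return HaskellInt(x.bits)

a = CSharpInt(42)
b = translate(a)

print(b.bits == a.bits)           # True: the representation carried over exactly
print(isinstance(a, HaskellInt))  # False: the original is still not a HaskellInt
```

On this analogy, a brain with a given cognitive architecture is like one of these types: other representations can be translated in, but only ever into its own native form.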
I am assuming that the twins communicating thoughts requires an act of will like speaking does. I do have reasons for this. Watching their faces when they communicate thoughts makes it seem voluntary.
But most of what you are doing when speaking is already subconscious: One can “understand” the rules of grammar well enough to form correct sentences on nearly all attempts, and yet be unable to explain the rules to a computer program (or to a child or ESL student). There is an element of will, but it’s only an element.
It may be the case that even with a high-bandwidth direct-brain interface it would take a lot of time and practice to understand another’s thoughts. Humans have a common cognitive architecture by virtue of shared genes, but most of our individual connectomes are randomized and shaped by individual experience. Our internal representations may thus be highly idiosyncratic, meaning a direct interface would be ad-hoc and only work on one person. How true this is, I can only speculate without more data.
In your programming language analogy, these data types are only abstractions built on top of a more fundamental CPU architecture where the only data types are bytes. Maybe an implementation of C# could be made that uses exactly the same bit pattern for an int as Haskell does. Human neurons work pretty much the same way across individuals, and even cortical columns seem to use the same architecture.
I don’t think the inability to communicate qualia is primarily due to the limitation of language, but due to the limitation of imagination. I can explain what a tesseract is, but that doesn’t mean you can visualize it. I could give you analogies with lower dimensions. Maybe you could understand well enough to make a mental model that gives you good predictions, but you still can’t visualize it. Similarly, I could explain what it’s like to be a tetrachromat, how septarine and octarine are colors distinct from the others, and maybe you can develop a model good enough to make good predictions about how it would work, but again you can’t visualize these colors. This failing is not on English.
Sure, the difference between hearing about a tesseract and being able to visualise it is significant, but I think the difference might not be an impossibility barrier, just a matter of skill in imagination.
Having learned some echolocation, the qualia involved in my hearing have changed, and that makes it seem possible to make a similar transition from a trichromat visual space into a tetrachromat visual space. The weird thing about it is that my ear receives as much information as it did before; I just pay attention to it differently. Deficient understanding, in the sense of getting things wrong, is an easy line to draw. But it seems that at some point the understanding becomes vivid instead of theoretical.
Qualia are intrinsic; you can’t construct a quale if you had the right set of particles.
I’m pretty sure that’s not what “intrinsic” is supposed to mean. From “The Qualities of Qualia” by David de Leon:
Within philosophy there is a distinction, albeit a contentious one, between intrinsic and extrinsic properties. Roughly speaking “extrinsic” seems to be synonymous with “relational.” The property of being an uncle, for example, is a property which depends on (and consists of) a relation to something else, namely a niece or a nephew. Intrinsic properties, then, are those which do not depend on this kind of relation. That qualia are intrinsic means that their qualitative character can be isolated from everything else going on in the brain (or elsewhere) and is not dependent on relations to other mental states, behaviour or what have you. The idea of the independence of qualia on any such relation may well stem from the conceivability of inverted qualia: we can imagine two physically identical brains having different qualia, or even that qualia are absent from one but not the other.
I find it important in philosophy to be clear about what you mean. It is one thing to explain and another to define what you mean. You might point to a yellow object and say “yellow,” and somebody who misunderstood might think that you mean “roundness” by yellow. Accuracy matters most when the views are radical and talk about very different worlds. And “disproving” yellow because it can’t be picked out by ostensive differentiation is not an argumentative victory but a communicative failure.
Even if we use some other term, I think it is important to have that meaning available. “Phlogiston” might sneak in claims, but that is all the more reason to have terms that leave as little room for smuggling as possible. And we still need good terms to talk about burning. “Oxygen” literally means “acid maker,” but we nowadays understand it as a term referring to an element which definitionally has very little to do with acids.
I think the starting point that generated the word refers to a genuine problem. Putting qualia in category three would mean claiming that I do not have experiences. And if “qualia” is a badly loaded word for the thing to be explained, it would be good to make up a new term that refers to it; to me, “qualia” was just that word. A term like “dark matter” might experience similar “hijack pressure” from wild claims being thrown around about it. And there, having things like “warm dark matter” and “wimpy dark matter” makes the classification finer, letting the conceptual analysis proceed. But the requirements of clear thinking are different from preserving tradition. If you say that “warm dark matter” can’t be the answer, the question of dark matter still stands. Even if you successfully argue that “qualia” can’t be an attractive concept, the issue of me not being a p-zombie still remains, and it would be expected that some theoretical bending over backwards would happen.
If we are able to explain why you believe in, and talk about qualia without referring to qualia whatsoever in our explanation, then we should reject the existence of qualia as a hypothesis
That argument has an inverse: “If we are able to explain why you believe in, and talk about, an external world without referring to an external world whatsoever in our explanation, then we should reject the existence of an external world as a hypothesis.”
People want reductive explanation to be unidirectional, so that you have an A and a B, and clearly it is the B which is redundant and can be replaced with A. But not all explanations work in that convenient way... sometimes A and B are mutually redundant, in the sense that you don’t need both.
The moral of the story being to look for the overall best explanation, not just eliminate redundancy.
In discussions about consciousness I find myself repeating the same basic argument against the existence of qualia constantly. I don’t do this just to be annoying: It is just my experience that
1. People find consciousness really hard to think about and has been known to cause a lot of disagreements.
2. Personally I think that this particular argument dissolved perhaps 50% of all my confusion about the topic, and was one of the simplest, clearest arguments that I’ve ever seen.
I am not being original either. The argument is the same one that has been used in various forms across Illusionist/Eliminativist literature that I can find on the internet. Eliezer Yudkowsky used a version of it many years ago. Even David Chalmers, who is quite the formidable consciousness realist, admits in The Meta-Problem of Consciousness that the argument is the best one he can find against his position.
The argument is simply this:
If we are able to explain why you believe in, and talk about qualia without referring to qualia whatsoever in our explanation, then we should reject the existence of qualia as a hypothesis.
This is the standard debunking argument. It has a more general form which can be used to deny the existence of a lot of other non-reductive things: distinct personal identities, gods, spirits, libertarian free will, a mind-independent morality etc. In some sense it’s just an extended version of Occam’s razor, showing us that qualia don’t do anything in our physical theories, and thus can be rejected as things that actually exist out there in any sense.
To me this argument is very clear, and yet I find myself arguing it a lot. I am not sure how else to get people to see my side of it other than sending them a bunch of articles which more-or-less make the exact same argument but from different perspectives.
I think the human brain is built to have a blind spot on a lot of things, and consciousness is perhaps one of them. I think quite a bit about how, if humanity is not able to think clearly about this thing which we have spent many research years on, then it seems like there might be some other low-hanging philosophical fruit still remaining.
Addendum: I am not saying I have consciousness figured out. However, I think it’s analogous to how atheists haven’t “got religion figured out” yet they have at the very least taken their first steps by actually rejecting religion. It’s not a full theory of religious belief, or even a theory at all. It’s just the first thing you do if you want to understand the subject. I roughly agree with Keith Frankish’s take on the matter.
And I assume your claim is that we can explain why I believe in Qualia without referring to qualia?
I haven’t thought that hard about this and am open to that argument. But afaict your comments here so far haven’t actually addressed this question yet.
Edit: to be clear, I don’t really care much why other people talk about qualia. I care why I perceive myself to experience things. If it’s an illusion, cool, but then why do I experience the illusion?
If belief is construed as some sort of representation which stands for external reality (as in the case of some correspondence theories of truth), then we can take the claim to be a strong prediction of contemporary neuroscience. Ditto for whether we can explain why we talk about qualia.
It’s not that I could explain exactly why you in particular talk about qualia. It’s that we have an established paradigm for explaining it.
It’s similar in the respect that we have an established paradigm for explaining why people report being able to see color. We can model the eye, and the visual cortex, and we have some idea of what neurons do even though we lack the specific information about how the whole thing fits together. And we could imagine that in the limit of perfect neuroscience, we could synthesize this information to trace back the reason why you said a particular thing.
Since we do not have perfect neuroscience, the best analogy would be analyzing the ‘beliefs’ and predictions of an artificial neural network. If you asked me, “Why does this ANN predict that this image is a 5 with 98% probability” it would be difficult to say exactly why, even with full access to the neural network parameters.
However, we know that unless our conception of neural networks is completely incorrect, in principle we could trace exactly why the neural network made that judgement, including the exact steps that caused the neural network to have the parameters that it has in the first place. And we know that such an explanation requires only the components which make up the ANN, and not any conscious or phenomenal properties.
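The traceability claim above can be made concrete with a toy example. The sketch below (a made-up two-layer network with arbitrary weights, not any particular real system) shows that a network’s “judgement” is just a chain of explicit arithmetic steps, every one of which can be inspected; nothing in the trace appeals to conscious or phenomenal properties.

```python
import numpy as np

# Toy two-layer network. Weights and input are arbitrary stand-ins:
# the point is only that every step of the prediction is inspectable.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_with_trace(x):
    """Return the prediction plus every intermediate value that produced it."""
    h_pre = x @ W1 + b1            # hidden-layer pre-activations
    h = np.maximum(h_pre, 0.0)     # ReLU
    logits = h @ W2 + b2
    probs = softmax(logits)
    trace = {"input": x, "hidden_pre": h_pre, "hidden": h,
             "logits": logits, "probs": probs}
    return probs, trace

probs, trace = predict_with_trace(np.array([1.0, 0.5, -0.2, 0.3]))
# The "why" of the output is exhausted by this chain of numbers:
for step, value in trace.items():
    print(step, value)
```

For a real deep network the trace is astronomically larger and hard to summarize, which is why it is difficult to say *exactly* why an image was classified as a 5; but nothing beyond such arithmetic steps is ever needed.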
I can’t tell whether we’re arguing about the same thing.
Like, I assume that I am a neural net predicting things and deciding things and if you had full access to my brain you could (in principle, given sufficient time) understand everything that was going on in there. But, like, one way or another I experience the perception of perceiving things.
(I’d prefer to taboo ‘Qualia’ in case it has particular connotations I don’t share. Just ‘that thing where Ray perceives himself perceiving things, and perhaps the part where sometimes Ray has preferences about those perceptions of perceiving because the perceptions have valence.’ If that’s what Qualia means, cool, and if it means some other thing I’m not sure I care)
My current working model of “how this aspect of my perception works” is described in this comment, I guess easy enough to quote in full:
The reason I care about any of this is that I believe that a “perceptions-having-valence” is probably morally relevant. (or, put in usual terms: suffering and pleasure seem morally relevant).
(I think it’s quite possible that future-me will decide I was confused about this part, but it’s the part I care about anyhow)
Are you saying the my perceiving-that-I-perceive-things-with-valence is an illusion, and that I am in fact not doing that? Or some other thing?
(To be clear, I AM open to ‘actually Ray yes, the counterintuitive answer is that no, you’re not actually perceiving-that-you-perceive-things-and-some-of-the-perceptions-have-valence.’ The topic is clearly confusing and behind the veil of epistemic-ignorance it seems quite plausible I’m the confused one here. Just noting that so far, from the way you’re phrasing things, I can’t tell whether your claims map onto the things I care about.)
To me this is a bit like the claim of someone who claimed psychic powers but still wanted to believe in physics who would say, “I assume you could perfectly well understand what was going on at a behavioral level within my brain, but there is still a datum left unexplained: the datum of me having psychic powers.”
There are a number of ways to respond to the claim:
1. We could redefine psychic powers to include mere physical properties. This has the problem that psychics insist that psychic power is entirely separate from physical properties. Simple re-definition doesn’t make the intuition go away and doesn’t explain anything.
2. We could alternatively posit new physics which incorporates psychic powers. This has the problem that it violates Occam’s razor, since the old physics was completely adequate. Hence the debunking argument I presented above.
3. Or, we could incorporate the phenomenon within a physical model by first denying that it exists and then explaining the mechanism which caused you to believe in it, and talk about it.
In the case of consciousness, the third response amounts to Illusionism, which is the view that I am defending. It has the advantage that it conservatively doesn’t promise to contradict known physics, and it also does justice to the intuition that consciousness really exists.
To most philosophers who write about it, qualia is defined as the experience of what it’s like. Roughly speaking, I agree with thinking of it as a particular form of perception that we experience.
However, it’s not just any perception, since some perceptions can be unconscious perceptions. Qualia specifically refer to the qualitative aspects of our experience of the world: the taste of wine, the touch of fabric, the feeling of seeing blue, the suffering associated with physical pain etc. These are said to be directly apprehensible to our ‘internal movie’ that is playing inside our head. It is this type of property which I am applying the framework of illusionism to.
I agree. That’s why I typically take the view that consciousness is a powerful illusion, and that we should take it seriously. Those who simply re-define consciousness as essentially a synonym for “perception” or “observation” or “information” are not doing justice to the fact that it’s the thing I care about in this world. I have a strong intuition that consciousness is what is valuable even despite the fact that I hold an illusionist view. To put it another way, I would care much less if you told me a computer was receiving a pain-signal (labeled in the code as some variable with suffering set to maximum), compared to the claim that a computer was actually suffering in the same way a human does.
Roughly speaking, yes. I am denying that that type of thing actually exists, including the valence claim.
It still feels very important that you haven’t actually explained this.
In the case of psychic powers, I (think?) we actually have pretty good explanations for where perceptions of psychic powers comes from, which makes the perception of psychic powers non-mysterious. (i.e. we know how cold reading works, and how various kinds of confirmation bias play into divination). But, that was something that actually had to be explained.
It feels like you’re just changing the name of the confusing thing from ‘the fact that I seem conscious to myself’ to ‘the fact that I’m experiencing an illusion of consciousness.’ Cool, but, like, there’s still a mysterious thing that seems quite important to actually explain.
Also just in general, I disagree that skepticism is not progress. If I said, “I don’t believe in God because there’s nothing in the universe with those properties...” I don’t think it’s fair to say, “Cool, but like, I’m still praying to something right, and that needs to be explained” because I don’t think that speaks fully to what I just denied.
In the case of religion, many people have a very strong intuition that God exists. So, is the atheist position not progress because we have not explained this intuition?
I agree that skepticism generally can be important progress (I recently stumbled upon this old comment making a similar argument about how saying “not X” can be useful)
The difference between God and consciousness is that the interesting bit about consciousness *is* my perception of it, full stop. Unlike God or psychic powers, there is no separate thing from my perception of it that I’m interested in.
If by perception you simply mean “You are an information processing device that takes signals in and outputs things” then this is entirely explicable on our current physical models, and I could dissolve the confusion fairly easily.
However, I think you have something else in mind which is that there is somehow something left out when I explain it by simply appealing to signal processing. In that sense, I think you are falling right into the trap! You would be doing something similar to the person who said, “But I am still praying to God!”
I don’t have anything else in mind that I know of. “Explained via signal processing” seems basically sufficient. The interesting part is “how can you look at a given signal-processing-system, and predict in advance whether that system is the sort of thing that would talk* about Qualia, if it could talk?”
(I feel like this was all covered in the sequences, basically?)
*where “talk about qualia” is shorthand for ‘would consider the concept of qualia important enough to have a concept for.’
I mean, I agree that this was mostly covered in the sequences. But I also think that I disagree with the way that most people frame the debate. At least personally I have seen people who I know have read the sequences still make basic errors. So I’m just leaving this here to explain my point of view.
Intuition: On a first approximation, there is something that it is like to be us. In other words, we are beings who have qualia.
Counterintuition: In order for qualia to exist, there would need to exist entities which are private, ineffable, intrinsic, subjective and this can’t be since physics is public, effable, and objective and therefore contradicts the existence of qualia.
Intuition: But even if I agree with you that qualia don’t exist, there still seems to be something left unexplained.
Counterintuition: We can explain why you think there’s something unexplained because we can explain the cause of your belief in qualia, and why you think they have these properties. By explaining why you believe it we have explained all there is to explain.
Intuition: But you have merely said that we could explain it. You have not actually explained it.
Counterintuition: Even without the precise explanation, we now have a paradigm for explaining consciousness, so it is not mysterious anymore.
This is essentially the point where I leave.
Physics is a map. Note that we can’t compare the map directly to the territory.
We do not telepathically receive experiment results when they are performed. In reality you need to take in the measurement results from your first-person point of view (use your eyes to read an LED screen, or use your ears to hear stories about experiments performed). It seems that experiments are intersubjective in that other observers will report having experiences that resemble my first-hand experiences. For most purposes, shorthanding this to “public” is adequate enough. But your point of view is “unpublishable” in that even if you really tried, there is no way to provide your private experience to the public knowledge pool (“directly”). “I know how you feel” is a fiction; it doesn’t actually happen.
Skepticism about the experiences of others is easier, but being skeptical about your own experiences would seem to be ludicrous.
I am not denying that humans take in sensory input and process it using their internal neural networks. I am denying that process has any of the properties associated with consciousness in the philosophical sense. And I am making an additional claim which is that if you merely redefine consciousness so that it lacks these philosophical properties, you have not actually explained anything or dissolved any confusion.
The illusionist approach is the best approach because it simultaneously takes consciousness seriously and doesn’t contradict physics. By taking this approach we also have an understood paradigm for solving the hard problem of consciousness: namely, the hard problem is reduced to the meta-problem (see Chalmers).
I don’t actually agree. Although I have not fully explained consciousness, I think that I have shown a lot.
In particular, I have shown us what the solution to the hard problem of consciousness would plausibly look like if we had unlimited funding and time. And to me, that’s important.
And under my view, it’s not going to look anything like, “Hey we discovered this mechanism in the brain that gives rise to consciousness.” No, it’s going to look more like, “Look at this mechanism in the brain that makes humans talk about things even though the things they are talking about have no real world referent.”
You might think that this is a useless achievement. I claim the contrary. As Chalmers points out, pretty much all the leading theories of consciousness fail the basic test of looking like an explanation rather than just sounding confused. Don’t believe me? Read Section 3 in this paper.
In short, Chalmers reviews the current state of the art in consciousness explanations. He first goes into Integrated Information Theory (IIT), but then convincingly shows that IIT fails to explain why we would talk about consciousness and believe in consciousness. He does the same for global workspace theories, first order representational theories, higher order theories, consciousness-causes-collapse theories, and panpsychism. Simply put, none of them even approach an adequate baseline of looking like an explanation.
I also believe that if you follow my view carefully you might stop being confused about a lot of things. Like, do animals feel pain? Well it depends on your definition of pain—consciousness is not real in any objective sense so this is a definition dispute. Same with asking whether person A is happier than person B, or asking whether computers will ever be conscious.
Perhaps this isn’t an achievement strictly speaking relative to the standard Lesswrong points of view. But that’s only because I think the standard Lesswrong point of view is correct. Yet even so, I still see people around me making fundamentally basic mistakes about consciousness. For instance, I see people treating consciousness as intrinsic, ineffable, private—or they think there’s an objectively right answer to whether animals feel pain and argue over this as if it’s not the same as a tree falling in a forest.
That’s an argument against dualism, not an argument against qualia. If mind-brain identity is true, neural activity is causing reports, and qualia, along with the rest of consciousness, are identical to neural activity, so qualia are also causing reports.
If you identify qualia as behavioral parts of our physical models, then are you also willing to discard the properties philosophers have associated with qualia, such as
Ineffable, as they can’t be explained using just words or mathematical sentences
Private, as they are inaccessible to outside third-person observers
Intrinsic, as they are fundamental to the way we experience the world
If you are willing to discard these properties, then I suggest we stop using the word “qualia”, since you have simply taken all the meaning away once you have identified them with things that actually exist. This is what I mean when I say that I am denying qualia.
It is analogous to someone who denies that souls exist by first conceding that we could identify certain physical configurations as examples of souls, but then explaining that this would be confusing to anyone who talks about souls in the traditional sense. Far better in my view to discard the idea altogether.
My orientation to this conversation seems more like “hmm, I’m learning that it is possible the word qualia has a bunch of connotations that I didn’t know it had”, as opposed to “hmm, I was wrong to believe in the-thing-I-was-calling-qualia.”
But I’m not yet sure that these connotations are actually universal – the wikipedia article opens with:
Later on, it notes the three characteristics (ineffable/private/intrinsic) that Dennett listed.
But this looks more like an accident of history than something intrinsic to the term. The opening paragraphs defined qualia the way I naively expected it to be defined.
My impression, looking at the various definitions and discussion, is not that qualia was defined in this specific fashion, so much as that various people trying to grapple with a confusing problem generated various possible definitions and rules for it, and some of those turned out to be false once we came up with better understanding.
I can see where you’re coming from with the soul analogy, but I’m not sure if it’s more like the soul analogy, or more like “One early philosopher defined ‘a human’ as a featherless biped, and then a later one said “dude, look at this featherless chicken I just made” and they realized the definition was silly.
I guess my question here is – do you have a suggestion for a replacement word for “the particular kind of observation that gets made by an entity that actually gets to experience the perception”? This still seems importantly different from “just a perception”, since very simple robots and thermostats or whatever can be said to have those. I don’t really care whether they are inherently private, ineffable or intrinsic, and whether Daniel Dennett was able to eff them seems more like a historical curiosity to me.
The wikipedia article specifically says that people argue a lot over the definitions:
That definition there is the one I’m generally using, and the one which seems important to have a word for. This seems more like a political/coordination question of “is it easier to invent a new word and gain traction for it, or get everyone on the same page about ‘actually, they’re totally in principle effable, you just might need to be a kind of mind different than a current-generation-human to properly eff them.’”
It does seem to me something like “I expect the sort of mind that is capable of viewing qualia of other people would be sufficiently different from a human mind that it may still be fair to call them ‘private/ineffable among humans.’”
Thanks for engaging with me on this thing. :)
I know I’m not being as clear as I could possibly be, and at some points I sort of feel like just throwing “Quining Qualia” or Keith Frankish’s articles or a whole bunch of other blog posts at people and say, “Please just read this and re-read it until you have a very distinct intuition about what I am saying.” But I know that that type of debate is not helpful.
I think I have a OK-to-good understanding of what you are saying. My model of your reply is something like this,
“Your claim is that qualia don’t exist because nothing with these three properties exists (ineffability/private/intrinsic), but it’s not clear to me that these three properties are universally identified with qualia. When I go to Wikipedia or other sources, they usually identify qualia with ‘what it’s like’ rather than these three very specific things that Daniel Dennett happened to list once. So, I still think that I am pointing to something real when I talk about ‘what it’s like’ and you are only disputing a perhaps-strawman version of qualia.”
Please correct me if this model of you is inaccurate.
I recognize what you are saying, and I agree with the place you are coming from. I really do. And furthermore, I really really agree with the idea that we should go further than skepticism and we should always ask more questions even after we have concluded that something doesn’t exist.
However, the place I get off the boat is where you keep talking about how this ‘what it’s like’ thing is actually referring to something coherent in the real world that has a crisp, natural boundary around it. That’s the disagreement.
I don’t think it’s an accident of history either that those properties are identified with qualia. The whole reason Daniel Dennett identified them was because he showed that they were the necessary conclusion of the sort of thought experiments people use for qualia. He spends the whole first several paragraphs justifying them using various intuition pumps in his essay on the matter.
Point being, when you are asked to clarify what ‘what it’s like’ means, you’ll probably start pointing to examples. Like, you might say, “Well, I know what it’s like to see the color green, so that’s an example of a quale.” And Daniel Dennett would then press the person further and go, “OK could you clarify what you mean when you say you ‘know what it’s like to see green’?” and the person would say, “No, I can’t describe it using words. And it’s not clear to me it’s even in the same category of things that can be either, since I can’t possibly conceive of an English sentence that would describe the color green to a blind person.” And then Daniel Dennett would shout, “Aha! So you do believe in ineffability!”
The point of those three properties (actually he lists 4, I think), is not that they are inherently tied to the definition. It’s that the definition is vague, and every time people are pressed to be more clear on what they mean, they start spouting nonsense. Dennett did valid and good deconfusion work where he showed that people go wrong in these four places, and then showed how there’s no physical thing that could possibly allow those four things.
These properties also show up all over the various thought experiments that people use when talking about qualia. For example, Nagel uses the private property in his essay “What Is it Like to Be a Bat?” Chalmers uses the intrinsic property when he talks about p-zombies being physically identical to humans in every respect except for qualia. Frank Jackson used the ineffability property when he talked about how Mary the neuroscientist had something missing when she was in the black and white room.
All of this is important to recognize. Because if you still want to say, “But I’m still pointing to something valid and real even if you want to reject this other strawman-entity” then I’m going to treat you like the person who wants to believe in souls even after they’ve been shown that nothing soul-like exists in this universe.
Spouting nonsense is different from being wrong. If I say that there are no rectangles with 5 angles, that can be processed pretty straightforwardly, because the concept of a rectangle is unproblematic. But if you seek out why that statement was made, and the person points to a pentagon, you will find 5 angles. Now, there are polygons with 5 angles. If you use a short word for “5-angled rectangle”, it’s correct to say those don’t exist. But if you give an ostensive definition of the shape, then it does exist, and it’s more to the point to say that it’s not a rectangle rather than that it doesn’t exist.
In the details, when a person says “what it is like to see green”, one could fail to get what they mean or point to. If someone says “look, a unicorn” and one has proof that unicorns don’t exist, that doesn’t mean that the unicorn reference is not referencing something, or that the reference target does not exist. If you end up in a situation where you point at a horse and say “those things do not exist. Look, no horn; doesn’t exist”, you are not being helpful. If somebody is pointing to a horse and says “look, a unicorn!” and you go “where? I see only horses”, you are also not being helpful. Being “motivatedly uncooperative in ostension-receiving” is not cool. Say that you made a deal to sell a gold bar in exchange for a unicorn. Then refusing to accept any object as a unicorn would let you keep your gold bar, and you might be tempted to play dumb.
When people are saying “what it feels like to see green”, they are trying to communicate something, and defeating their assertion by sabotaging their communication doesn’t prove anything. Communication is hard, yes, but doing too much semantic substitution means you start talking past each other.
I am not suggesting that qualia should be identified with neural activity in a way that loses any aspects of the philosophical definition… bearing in mind that the philosophical definition does not assert that qualia are non-physical.
What are you experiencing right now? (E.g. what do you see in front of you? In what sense does it seem to be there?)
I won’t lie—I have a very strong intuition that there’s this visual field in front of me, and that I can hear sounds that have distinct qualities, and simultaneously I can feel thoughts rush into my head as if there is an internal speaker and listener. And when I reflect on some visual in the distance, it seems as though the colors are very crisp and exist in some way independent of simple information processing in a computer-type device. It all seems very real to me.
I think the main claim of the illusionist is that these intuitions (at least insofar as the intuitions are making claims about the properties of qualia) are just radically incorrect. It’s as if our brains have an internal error in them, not allowing us to understand the true nature of these entities. It’s not that we can’t see or something like that. It’s just that the quality of perceiving the world has essentially an identical structure to what one might imagine a computer with a camera would “see.”
Analogy: Some people who claim to have experienced heaven aren’t just making stuff up. In some sense, their perception is real. It just doesn’t have the properties we would expect it to have at face value. And if we actually tried looking for heaven in the physical world we would find it to be little else than an illusion.
What’s the difference between making claims about nearby objects and making claims about qualia (if there is one)? If I say there’s a book to my left, is that saying something about qualia? If I say I dreamt about a rabbit last night, is that saying something about qualia?
(Are claims of the form “there is a book to my left” radically incorrect?)
That is, is there a way to distinguish claims about qualia from claims about local stuff/phenomena/etc?
Sure. There are a number of properties usually associated with qualia which are the things I deny. If we strip these properties away (something Keith Frankish refers to as zero qualia) then we can still say that they exist. But it’s confusing to say that something exists when its properties are so minimal. Daniel Dennett listed a number of properties that philosophers have assigned to qualia and conscious experience more generally:
Ineffable because there’s something Mary the neuroscientist is missing when she is in the black and white room. And someone who tried explaining color to her would not be able to fully.
Intrinsic because it cannot be reduced to bare physical entities, like electrons (think: could you construct a quale if you had the right set of particles?).
Private because they are accessible to us and not globally available. In this sense, if you tried to find out the qualia that a mouse was experiencing as it fell victim to a trap, you would come up fundamentally short because it was specific to the mouse mind and not yours. Or as Nagel put it, there’s no way that third person science could discover what it’s like to be a bat.
Directly apprehensible because they are the elementary things that make up our experience of the world. Look around and qualia are just what you find. They are the building blocks of our perception of the world.
It’s not necessarily that none of these properties could be steelmanned. It is just that they are so far from being steelmannable that it is better to deny their existence entirely. It is the same as my analogy with a person who claims to have visited heaven. We could either talk about it as illusory or non-illusory. But for practical purposes, if we chose the non-illusory route we would probably be quite confused. That is, if we tried finding heaven inside the physical world, with the same properties as the claimant had proposed, then we would come up short. Far better then, to treat it as a mistake inside of our cognitive hardware.
Thanks for the elaboration. It seems to me that experiences are:
Hard-to-eff, as a good-enough theory of what physical structures have which experiences has not yet been discovered, and would take philosophical work to discover.
Hard to reduce to physics, for the same reason.
In practice private due to mind-reading technology not having been developed, and due to bandwidth and memory limitations in human communication. (It’s also hard to imagine what sort of technology would allow replicating the experience of being a mouse)
Pretty directly apprehensible (what else would be? If nothing is, what do we build theories out of?)
It seems natural to conclude from this that:
Physical things exist.
Experiences exist.
Experiences probably supervene on physical things, but the supervenience relation is not yet determined, and determining it requires philosophical work.
Given that we don’t know the supervenience relation yet, we need to at least provisionally have experiences in our ontology distinct from physical entities. (It is, after all, impossible to do physics without making observations and reporting them to others)
Is there something I’m missing here?
Here’s a thought experiment which helped me lose my ‘belief’ in qualia: would a robot scientist, who was only designed to study physics and make predictions about the world, ever invent qualia as a hypothesis?
Assuming the actual mouth movements we make when we say things like, “Qualia exist” are explainable via the scientific method, the robot scientist could still predict that we would talk and write about consciousness. But would it posit consciousness as a separate entity altogether? Would it treat consciousness as a deep mystery, even after peering into our brains and finding nothing but electrical impulses?
Robots take in observations. They make theories that explain their observations. Different robots will make different observations and communicate them to each other. Thus, they will talk about observations.
After making enough observations they make theories of physics. (They had to talk about observations before they made low-level physics theories, though; after all, they came to theorize about physics through their observations). They also make bridge laws explaining how their observations are related to physics. But, they have uncertainty about these bridge laws for a significant time period.
The robots theorize that humans are similar to them, based on the fact that they have functionally similar cognitive architecture; thus, they theorize that humans have observations as well. (The bridge laws they posit are symmetric that way, rather than being silicon-chauvinist)
I think you are using the word “observation” to refer to consciousness. If this is true, then I do not deny that humans take in observations and process them.
However, I think the issue is that you have simply re-defined consciousness into something which would be unrecognizable to the philosopher. To that extent, I don’t say you are wrong, but I will allege that you have not done enough to respond to the consciousness-realist’s intuition that consciousness is different from physical properties. Let me explain:
If qualia are just observations, then it seems obvious that Mary is not missing any information in her room, since she can perfectly well understand and model the process by which people receive color observations.
Likewise, if qualia are merely observations, then the Zombie argument amounts to saying that p-Zombies are beings which can’t observe anything. This seems patently absurd to me, and doesn’t seem like it’s what Chalmers meant at all when he came up with the thought experiment.
Likewise, if we were to ask, “Is a bat conscious?” then the answer would be a vacuous “yes” under your view, since bats have echolocators which take in observations and process information.
In this view even my computer is conscious since it has a camera on it. For this reason, I suggest we are talking about two different things.
Mary’s room seems uninteresting, in that robot-Mary can predict pretty well what bit-pattern she’s going to get upon seeing color. (To the extent that the human case is different, it’s because of cognitive architecture constraints)
Regarding the zombie argument: The robots have uncertainty over the bridge laws. Under this uncertainty, they may believe it is possible that humans don’t have experiences, due to the bridge laws only identifying silicon brains as conscious. Then humans would be zombies. (They may have other theories saying this is pretty unlikely / logically incoherent / etc)
Basically, the robots have a primitive entity “my observations” that they explain using their theories. They have to reconcile this with the eventual conclusion they reach that their observations are those of a physically instantiated mind like other minds, and they have degrees of freedom in which things they consider “observations” of the same type as “my observations” (things that could have been observed).
As a qualia denier, I sometimes feel like I side more with the Chalmers side of the argument, which at least admits that there’s a strong intuition for consciousness. It’s not that I think that the realist side is right, but it’s that I see the naive physicalists making statements that seem to completely misinterpret the realist’s argument.
I don’t mean to single you out in particular. However, you state that Mary’s room seems uninteresting because Mary is able to predict the “bit pattern” of color qualia. This seems to me to completely miss the point. When you look at the sky and see blue, is it immediately apprehensible as a simple bit pattern? Or does it at least seem to have qualitative properties too?
I’m not sure how to import my argument onto your brain without you at least seeing this intuition, which is something I considered obvious for many years.
There is a qualitative redness to red. I get that intuition.
I think “Mary’s room is uninteresting” is wrong; it’s uninteresting in the case of robot scientists, but interesting in the case of humans, in part because of what it reveals about human cognitive architecture.
I think in the human case, I would see Mary seeing a red apple as gaining in expressive vocabulary rather than information. She can then describe future things as “like what I saw when I saw that first red apple”. But, in the case of first seeing the apple, the redness quale is essentially an arbitrary gensym.
I suppose I might end up agreeing with the illusionist view on some aspects of color perception, then, in that I predict color quales might feel like new information when they actually aren’t. Thanks for explaining.
I am curious whether you disagree with the claim that (human) Mary gains implicit information: despite already knowing many facts about redness, her (human) optic system couldn’t have predicted the incoming visual data from the apple before seeing it, but afterwards can.
That does seem right, actually.
Now that I think about it, due to this cognitive architecture issue, she actually does gain new information. If she sees a red apple in the future, she can know that it’s red (because it produces the same qualia as the first red apple), whereas she might be confused about the color if she hadn’t seen the first apple.
I think I got confused because, while she does learn something upon seeing the first red apple, it isn’t the naive “red wavelengths are red-quale”, it’s more like “the neurons that detect red wavelengths got wired and associated with the abstract concept of red wavelengths.” Which is still, effectively, new information to Mary-the-cognitive-system, given limitations in human mental architecture.
A physicist might discover that you can make computers out of matter. You can make such computers produce sounds. In processing sounds, “homonym” is a perfectly legitimate and useful concept. Even if two words are stored in far-away hardware locations, knowing that they will “sound-detection clash” is important information. Even if you slice it a little differently and use different kinds of computer architectures, it would still be a real phenomenon.
In technical terms there might be the issue of whether it’s meaningful to differentiate between founded concepts and hypotheses. If hypotheses are required, then you could have a physicist who never talked about temperature.
It seems to me that you are trying to recover the properties of conscious experience in a way that can be reduced to physics. Ultimately, I just feel that this approach is not likely to succeed without radical revisions to what you consider to be conscious experience. :)
Generally speaking, I agree with the dualists who argue that physics is incompatible with the claimed properties of qualia. Unlike the dualists, I see this as a strike against qualia rather than a strike against physics. David Chalmers does a great job in his articles outlining why conscious properties don’t fit nicely in our normal physical models.
It’s not simply that we are awaiting more data to fill in the details: it’s that there seems to be no way even in principle to incorporate conscious experience into physics. Physics is just a different type of beast: it has no mental core, it is entirely made up of mathematical relations, and is completely global. Consciousness as it’s described seems entirely inexplicable in that respect, and I don’t see how it could possibly supervene on the physical.
One could imagine a hypothetical heaven-believer (someone who claimed to have gone to heaven and back) listing possible ways to incorporate their experience into physics. They could say,
On the other hand, a skeptic could reply that:
Even if mind reading technology isn’t good enough yet, our best models say that humans can be described as complicated computers with a particular neural network architecture. And we know that computers can have bugs in them causing them to say things when there is no logical justification.
Also, we know that computers can lack perfect introspection, so even if the believer is utterly convinced that heaven is real, this could just be because the computer is following its programming and is exceptionally stubborn.
Heaven has no clear interpretation in our physical models. Yes, we could see that a supervenience is possible. But why rely on that hope? Isn’t it better to say that the belief is caused by some sort of internal illusion? The latter hypothesis is at least explicable within our models and doesn’t require us to make new fundamental philosophical advances.
It seems that doubting that we have observations would cause us to doubt physics, wouldn’t it? Since physics-the-discipline is about making, recording, communicating, and explaining observations.
Why think we’re in a physical world if the observations that seem to suggest we are are themselves illusory?
This is kind of like if the people saying we live in a material world arrived at these theories through their heaven-revelations, and can only explain the epistemic justification for belief in a material world by positing heaven. Seems odd to think heaven doesn’t exist in this circumstance.
(Note, personally I lean towards supervenient neutral monism: direct observation and physical theorizing are different modalities for interacting with the same substance, and mental properties supervene on physical ones in a currently-unknown way. Physics doesn’t rule out observation, in fact it depends on it, while itself being a limited modality, such that it is unsurprising if you couldn’t get all modalities through the physical-theorizing modality. This view seems non-contradictory, though incomplete.)
Your beliefs seem to have a similar characteristic to ones I have encountered on LessWrong before.
https://www.lesswrong.com/posts/TniCuWCDxQeqFSxut/arguments-for-the-existence-of-qualia-1?commentId=Zwyh8Xt5uaZ4ZBYbP
There is the phenomenon of qualia and then there is the ontological extension. The word does not refer to the ontological extension.
It would be like explaining lightning with lightning. Sure when we dig down there are non-lightning parts. But lightning still zaps people.
Or it would be a category error, like saying that if you can explain physics without coordinates by only positing that energy exists, you should drop coordinates from your concepts. But coordinates are not a thing to believe in; they are a conceptual tool to specify claims, not a hypothesis in themselves. When physicists believe in a particular field theory they are not agreeing with the Greek philosophers who think that the world is made of a type of number.
My basic claim is that the way that people use the word qualia implicitly implies the ontological extensions. By using the term, you are either smuggling these extensions in, or you are using the term in a way that no philosopher uses it. Here are some intuitions:
Qualia are private entities which occur to us and can’t be inspected via third person science.
Qualia are ineffable; you can’t explain them using a sufficiently complex English or mathematical sentence.
Qualia are intrinsic; you couldn’t construct a quale even if you had the right set of particles.
etc.
Now, that’s not to say that you can’t define qualia in such a way that these ontological extensions are avoided. But why do so? If you are simply re-defining the phenomenon, then you have not explained anything. The intuitions above still remain, and there is something still unexplained: namely, why people think that there are entities with the above properties.
That’s why I think that instead, the illusionist approach is the correct one. Let me quote Keith Frankish, who I think does a good job explaining this point of view,
In the case of lightning, I think that the first approach would be correct, since lightning forms a valid physical category under which we can cast our scientific predictions of the world. In the case of the orbit of Uranus, the second approach is correct, since it was adequately explained by appealing to understood Newtonian physics. However, the third approach is most apt for bizarre phenomena that seem at first glance to be entirely incompatible with our physics. And qualia certainly fit the bill in that respect.
When I say “qualia” I mean individual instances of subjective, conscious experience full stop. These three extensions are not what I mean when I say “qualia”.
Not convinced of this. There are known neural correlates of consciousness. That our current brain scanners lack the required resolution to make them inspectable does not prove that they are not inspectable in principle.
This seems to be a limitation of human language bandwidth/imagination, but not fundamental to what qualia are. Consider the case of the conjoined twins Krista and Tatiana, who share some brain structure and seem to be able to “hear” each other’s thoughts and see through each other’s eyes.
Suppose we set up a thought experiment. Suppose that they grow up in a room without color, like Mary’s room. Now knock out Krista and show Tatiana something red. Remove the red thing before Krista wakes up. Wouldn’t Tatiana be able to communicate the experience of red to her sister? That’s an effable quale!
And if they can do it, then in principle, so could you, with a future brain-computer interface.
Really, communicating at all is a transfer of experience. We’re limited by common ground, sure. We both have to be speaking the same language, and have to have enough experience to be able to imagine the other’s mental state.
Again, not convinced. Isn’t your brain made of particles? I construct qualia all the time just by thinking about it. (It’s called “imagination”.) I don’t see any reason in principle why this could not be done externally to the brain either.
The Tatiana and Krista experiment is quite interesting but stretches the concept of communication to its limits. I am inclined to say that having a shared part of your consciousness is not communication, in the same way that sharing a house is not traffic. It does strike me that communication involves directed construction of thoughts, and it’s easy to imagine that the scope of what this construction is capable of would be vastly smaller than what goes on in the brain in other processes. Extending the construction to new types of thoughts might be a soft border rather than a hard one. With enough verbal sentences it should in principle be possible to reconstruct an actual graphical image, but even with overtly descriptive prose this level is not really reached (I presume); it remains within the realm of sentence-like data structures.
In the example, Tatiana directs the visual cortex and Krista can just recall the representation later. But in a single-consciousness brain nothing can be made “ready”; it must be assembled by the brain itself from sensory inputs. That is, cognitive space probably has small funnels, and significant objects can’t travel through them as themselves but must be chopped into pieces and reassembled after passing the tube.
Let’s extend the thought experiment a bit. Suppose technology is developed to separate the twins. They rely on their shared brain parts for vital functions, so where we cut nerve connections we replace them with a radio transceiver and electrode array in each twin.
Now they are communicating thoughts via a prosthesis. Is that not communication?
Maybe you already know what it is like to be a hive mind with a shared consciousness, because you are one: cutting the corpus callosum creates a split-brained patient that seems to have two different personalities that don’t always agree with each other. Maybe there are some connections left, but the bandwidth has been drastically reduced. And even within hemispheres, the brain seems to be composed of yet smaller modules. Your mind is made of parts that communicate with each other and share experience, and some of it is conscious.
I think the line dividing individual persons is a soft one. A sufficiently high-bandwidth communication interface can blur that boundary, even to the point of fusing consciousness like brain hemispheres. Shared consciousness means shared qualia, even if that connection is later severed, you might still remember what it was like to be the other person. And in that way, qualia could hypothetically be communicated between individuals, or even species.
If you would copy my brain but make it twice as large that copy would be as “lonely” as I would be and this would remain after arbitrary doublings. Single individuals can be extended in space without communicating with other individuals.
The “extended wire” thought experiment doesn’t specify enough how that physical communication line is used. It’s plausible that there is no “verbalization” process, the way there is a step of writing an email if one replaces sonic communication with IP-packet communication. With huge relative distance would come speed-of-light delays: if one twin were on Earth and another on the Moon, there would be a round-trip latency of seconds, which would probably distort how the combined brain works. (And I guess doubling in size would need to come with proportionate slowing to preserve the same function.)
I think there is a difference between an information system being spatially extended and having two information systems interface with each other. Say you have 2 routers or 10 routers on the same length of line. It makes sense to make a distinction that each router functions “independently”, even though they have to cooperate with each other enough that packets flow through. To the first router, the world “downline” seems very similar whether or not the intermediate routers exist. I don’t count an information system’s internal processing as communicating, and thus I don’t count “thinking” as communicating. Thus the 10-router version does more communicating than the 2-router version.
I think the “verbalization” step does mean that even a high-bandwidth connection doesn’t automatically mean qualia sharing. I am thinking of plugins that allow programming languages to share code. Even if there is perfect 1-to-1 compatibility between the abstractions of the languages, I think each language still only ever manipulates its own version of that representation. Cross-using without translation would make it ill-defined what correct function would be, but if you do translation then it loses the qualities of the originating programming language. A C# integer variable will never contain a Haskell integer, even if a C# integer is constructed to represent the Haskell integer. (I guess it would be possible to make a super-language that has integer variables that can contain Haskell integers and C# integers, but that language would be neither C# nor Haskell.) By being a specific kind of cognitive architecture you are locked into certain representation types, which are inescapable short of turning into another kind of architecture.
I am assuming that the twins communicating thoughts requires an act of will like speaking does. I do have reasons for this. Watching their faces when they communicate thoughts makes it seem voluntary.
But most of what you are doing when speaking is already subconscious: One can “understand” the rules of grammar well enough to form correct sentences on nearly all attempts, and yet be unable to explain the rules to a computer program (or to a child or ESL student). There is an element of will, but it’s only an element.
It may be the case that even with a high-bandwidth direct-brain interface it would take a lot of time and practice to understand another’s thoughts. Humans have a common cognitive architecture by virtue of shared genes, but most of our individual connectomes are randomized and shaped by individual experience. Our internal representations may thus be highly idiosyncratic, meaning a direct interface would be ad-hoc and only work on one person. How true this is, I can only speculate without more data.
In your programming language analogy, these data types are only abstractions built on top of a more fundamental CPU architecture where the only data types are bytes. Maybe an implementation of C# could be made that uses exactly the same bit pattern for an int as Haskell does. Human neurons work pretty much the same way across individuals, and even cortical columns seem to use the same architecture.
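The point about abstractions sharing a CPU-level representation can be sketched concretely. This is a minimal illustration in Python, where `CSharpStyleInt` is a hypothetical wrapper class standing in for a C#-style integer from the analogy; the names and the little-endian 32-bit choice are assumptions for the sketch, not anything from the thread:

```python
import struct

# A hypothetical "C#-style" integer: a different high-level abstraction
# wrapping the same underlying value as a plain int.
class CSharpStyleInt:
    def __init__(self, value: int):
        self.value = value

def to_bits(n: int) -> bytes:
    # Pack as a little-endian 32-bit signed integer -- the shared
    # machine-level representation both abstractions compile down to.
    return struct.pack("<i", n)

haskell_int = 42                  # stand-in for a Haskell integer
csharp_int = CSharpStyleInt(42)   # stand-in for a C# integer

# At the byte level the two representations are identical,
# even though the high-level abstractions differ.
assert to_bits(haskell_int) == to_bits(csharp_int.value)
```

The two “languages” never manipulate each other’s objects directly, yet the bit pattern underneath is the same, which is the sense in which a shared implementation could exist beneath distinct abstractions.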
I don’t think the inability to communicate qualia is primarily due to the limitation of language, but due to the limitation of imagination. I can explain what a tesseract is, but that doesn’t mean you can visualize it. I could give you analogies with lower dimensions. Maybe you could understand well enough to make a mental model that gives you good predictions, but you still can’t visualize it. Similarly, I could explain what it’s like to be a tetrachromat, how septarine and octarine are colors distinct from the others, and maybe you can develop a model good enough to make good predictions about how it would work, but again you can’t visualize these colors. This failing is not on English.
Sure, the difference between hearing about a tesseract and being able to visualise it is significant, but I think the difference might not be an impossibility barrier, just a skill level of imagination.
Having learned some echolocation, my qualia involved in hearing have changed, and it makes it seem possible to make a similar transition from a trichromat visual space to a tetrachromat visual space. The weird thing about it is that my ear receives as much information as it did before; I just pay attention to it differently. Deficient understanding in the sense of getting things wrong is an easy line to draw. But it seems at some point the understanding becomes vivid instead of theoretical.
I’m pretty sure that’s not what “intrinsic” is supposed to mean. From “The Qualities of Qualia” by David de Leon.
I find it important in philosophy to be clear about what you mean. It is one thing to explain and another to define what you mean. You might point to a yellow object and say “yellow”, and somebody who misunderstood might think that you mean “roundness” by “yellow”. Accuracy is most important when the views are radical and the parties talk in very different worlds. And “disproving” yellow by failing to pick it out via ostensive differentiation is not an argumentative victory but a communicative failure.
Even if we use some other term, I think that meaning is important to have. “Phlogiston” might sneak in claims, but that is all the more reason to have terms with as little room for smuggling as possible. And we still need good terms to talk about burning. “Oxygen” literally means “acid maker”, but we nowadays understand it as a term referring to an element which definitionally has very little to do with acids.
I think the starting point that generated the word refers to a genuine problem. Having qualia in category three would mean claiming that I do not have experiences. And if “qualia” is a bad, loaded word for the thing to be explained, it would be good to make up a new term that refers to that; but to me, “qualia” was just that word. A word like “dark matter” might experience similar “hijack pressure” by having wild claims thrown around about it. And there, having things like “warm dark matter” and “WIMP dark matter” makes the classification finer, letting the conceptual analysis proceed. But the requirements of clear thinking are different from preserving tradition. If you say that “warm dark matter” can’t be the answer, the question of dark matter still stands. Even if you successfully argue that “qualia” can’t be an attractive concept, the issue of my not being a p-zombie still remains, and some theoretical bending over backwards would be expected.
That argument has an inverse: “If we are able to explain why you believe in, and talk about, an external world without referring to an external world whatsoever in our explanation, then we should reject the existence of an external world as a hypothesis”.
People want reductive explanation to be unidirectional, so that you have an A and a B, and clearly it is the B which is redundant and can be replaced with A. But not all explanations work in that convenient way... sometimes A and B are mutually redundant, in the sense that you don’t need both.
The moral of the story being to look for the overall best explanation, not just eliminate redundancy.
It’s a strong argument, but there are strong arguments on the other side as well.