If you want something that’s more philosopher-ish, and a bit further from how I think about the topic today, here’s what I said to Geoff in 2014 (in part):
[...]
Phenomenal realism [i.e., the belief that we are phenomenally conscious] has lots of prima facie plausibility, and standard reductionism looks easily refuted by the hard problem. But my experience is that the more one shifts from the big-picture question 'is reductionism tenable?' to a detailed assessment of the non-physicalist options, the more problems arise—for interactionism and epiphenomenalism alike, for panpsychism and emergent dualism alike, for property and substance and 'aspect' dualism alike, for standard fundamentalism and 'reductionism-to-nonphysical-properties' alike.
All of the options look bad, and I take that as a strong hint that there’s something mistaken at a very deep level about introspection, and/or about our concept of ‘phenomenal consciousness’. We’re clearly conscious in some sense—we have access consciousness, ‘awake’ consciousness, and something functionally similar to phenomenal consciousness (we might call it ‘functional consciousness,’ or zombie consciousness) that’s causally responsible for all the papers our fingers write about the hard problem. But the least incredible of the available options is that there’s an error at the root of our intuitions (or, I’d argue, our perception-like introspection). It’s not as though we have evolutionary or neuroscientific reasons to expect brains to be as good at introspection or phenomenological metaphysics as they are at perceiving and manipulating ordinary objects.
[...]
Eliminativism is definitely counter-intuitive, and I went through many views of consciousness before arriving at it. It's especially subversive of the intuitions of those raised on Descartes and the phenomenological tradition. There are several ways I motivate and make sense of eliminativism:
(I’ll assume, for the moment, that the physical world is causally closed; if you disagree in a way that importantly undermines one of my arguments, let me know.)
1. Make an extremely strong case against both reductionism and fundamentalism. Then, though eliminativism still seems bizarre—we might even be tempted to endorse mysterianism here—we at least have strong negative grounds to suspect that it’s on the right track.
2. Oversimplifying somewhat: reductionism is conceptually absurd, fundamentalism is metaphysically absurd (for the reasons I gave in my last e-mail), and eliminativism is introspectively absurd. There are fairly good reasons to expect evolution to have selected for brains that are good at manipulating concepts (so we can predict the future, infer causality, relate instances to generalizations, …), and good reasons to expect evolution to have selected for brains that are good at metaphysics (so we can model reality, have useful priors, usefully update them, …). So, from an outside perspective, we should penalize reductionism and fundamentalism heavily for violating our intuitions about, respectively, the implications of our concepts and the nature of reality.
The selective benefits of introspection, on the other hand, are less obvious. There are clear advantages to knowing some things about our brains—to noticing when we're hungry, to reflecting upon similarities between a nasty smell and past nasty smells, to verbally communicating our desires. But it's a lot less obvious that the character of phenomenal consciousness is something our ancestral environment would have punished people for misinterpreting. As long as you can notice the similarity-relations between experiences, their spatial and temporal structure, etc.—all their functional properties—it shouldn't matter to evolution whether or not you can veridically introspect their nonfunctional properties, since (ex hypothesi) it makes no difference whatsoever which nonfunctional properties you instantiate.
And just as there’s no obvious evolutionary reason for you to be able to tell which quale you’re instantiating, there’s also no obvious evolutionary reason for you to be able to tell that you’re instantiating qualia at all.
Our cognition about P-consciousness looks plausibly like an evolutionary spandrel, a side-effect shaped by chance neural processes and genetic drift. Can we place enough confidence in this process, all things considered, to refute mainstream physics?
3. The word 'consciousness' has theoretical content. It's not, for instance, a completely bare demonstrative act—like saying 'something is going on, and whatever it is, I dub it [foo]', or 'that, whatever it is, is [foo]'. If 'I'm conscious' were as theory-neutral as all that, then absolutely anything could count equally well as a candidate referent—a hat, the entire physical universe, etc.
Instead, implicitly embedded within the idea of ‘consciousness’ are ideas about what could or couldn’t qualify as a referent. As soon as we build in those expectations, we leave the charmed circle of the cogito and can turn out to be mistaken.
4. I’ll be more specific. When I say ‘I’m experiencing a red quale’, I think there are at least two key ideas we’re embedding in our concept ‘red quale’. One is subjectivity or inwardness: P-consciousness, unlike a conventional physical system, is structured like a vantage point plus some object-of-awareness. A second is what we might call phenomenal richness: the redness I’m experiencing is that specific hue, even though it seems like a different color (qualia inversion, alien qualia) or none at all (selective blindsight) would have sufficed.
I think our experiences' apparent inwardness is what undergirds the zombie argument. Experiences and spacetime regions seem to be structured differently, and the association between the two seems contingent, because we have fundamentally different mental modules for modeling physical vs. mental facts. You can always entertain the possibility that something is a zombie, and you can always entertain the possibility that something (e.g., a rock, or a starfish) has a conscious inner life, without thereby imagining altering its physical makeup. Imagining that a rock could be on fire without changing its physical makeup seems absurd, because fire and rocks are in the same magisterium; and imagining that an experience of disgust could include painfulness without changing its phenomenal character seems absurd, because disgust and pain are in the same magisterium; but when you cross magisteria, anything goes, at least in terms of what our brains allow us to posit in thought experiments.
Conceptually, mind and matter operate like non-overlapping magisteria; but an agent could have a conceptual division like that without actually being P-conscious or actually having an ‘inside’ irreducibly distinct from its physical ‘outside’. You could design an AI like that, much like Chalmers imagines designing an AI that spontaneously outputs ‘I think therefore I am’ and ‘my experiences aren’t fully reducible to any physical state’.
5. Phenomenal richness, I think, is a lot more difficult to make sense of (for physicalists) than inwardness. Chalmers gestures toward some explanations, but it still seems hard to tell an evolutionary/cognitive story here. The main reframe I find useful is to recognize that introspected experiences aren't atoms; they have complicated parts, structures, and dynamics. In particular, we can peek under the hood by treating them as metacognitive representations of lower-order neural states. (E.g., the experience of pain perhaps represents somatic damage, but it also represents the nociceptors carrying pain signals to my brain.)
With representation comes the possibility of misrepresentation. Sentence-shaped representations (‘beliefs’) can misrepresent, when people err or are deluded; and visual-field-shaped representations (‘visual perceptions’) can misrepresent, when people are subject to optical illusions or hallucinations. The metacognitive representations (of beliefs, visual impressions, etc.) we call ‘conscious experiences’, then, can also misrepresent what features are actually present in first-order experiences.
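The structure of this argument can be made concrete with a toy sketch. This is purely illustrative (all names are hypothetical, and nothing here models actual neural machinery): a first-order state, and a metacognitive "readout" of it that mostly tracks the state but gets one feature systematically wrong—so the report attributes a property the state itself lacks.

```python
# Toy illustration of misrepresentation by a metacognitive monitor.
# All names are hypothetical; this is a sketch of the *logical* point,
# not a model of the brain.
from dataclasses import dataclass

@dataclass
class FirstOrderState:
    """A lower-order state, e.g. activation in a sensory channel."""
    intensity: float

def introspect(state: FirstOrderState) -> float:
    """A metacognitive representation of the first-order state.

    Like a painting of a painting, it mostly tracks the original but
    systematically distorts one detail (here, it exaggerates intensity).
    """
    return state.intensity * 1.5  # systematic misrepresentation

state = FirstOrderState(intensity=0.4)
report = introspect(state)
# The report is *about* the state, yet attributes a feature (0.6)
# that the state itself does not have (0.4) -- content that "isn't
# necessarily there" in the thing represented.
```

The point of the sketch: nothing in the monitor's output certifies its own accuracy, so an agent relying only on `introspect` has no internal way to detect the distortion.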
Dennett makes a point like this, but he treats the relevant metarepresentations as sentence-shaped ‘judgments’ or ‘hunches’. I would instead say that the relevant metarepresentations look like environmental perceptions, not like beliefs.
When conscious experience is treated like a real object ‘grasped’ by a subject, it’s hard to imagine how you could be wrong about your experience—after all, it’s right there! But when I try to come up with a neural mechanism for my phenomenal judgments, or a neural correlate for my experience of phenomenal ‘manifestness’, I run into the fact that consciousness is a representation like any other, and can have representational content that isn’t necessarily there.
In other words, it is not philosophically or scientifically obligatory to treat the introspectible contents of my visual field as real objects I grasp; one can instead treat them as intentional objects, promissory notes that may or may not be fulfilled. It is a live possibility that human introspection : a painting of a unicorn :: phenomenal redness : a unicorn, even though the more natural metaphor is to think of phenomenal redness as the painting’s ‘paint’. More exactly, the analogy is to a painting of a painting, where the first painting mostly depicts the second accurately, but gets a specific detail (e.g., its saturation level or size) systematically wrong.
One nice feature of this perspective shift is that treating phenomenal redness as an intentional object doesn't prove that it isn't present; but it allows us to leave the possibility of absence open at the outset, and evaluate the strengths and weaknesses of eliminativism, reductionism, and fundamentalism without presupposing the truth or falsity of any one of them.