Why it’s so hard to talk about Consciousness

[Thanks to Charlie Steiner, Richard Kennaway, and Said Achmiz for helpful discussion. Extra special thanks to the Long-Term Future Fund for funding research related to this post.]

[Epistemic status: my best guess after having read a lot about the topic, including all LW posts and comment sections with the consciousness tag]

There’s a common pattern in online debates about consciousness. It looks something like this:

One person will try to communicate a belief or idea to someone else, but they cannot get through no matter how hard they try. Here’s a made-up example:


“It’s obvious that consciousness exists.”

-Yes, it sure looks like the brain is doing a lot of non-parallel processing that involves several spatially distributed brain areas at once, so-

“I’m not just talking about the computational process. I mean qualia obviously exist.”

-Define qualia.

“You can’t define qualia; it’s a primitive. But you know what I mean.”

-I don’t. How could I if you can’t define it?

“I mean that there clearly is some non-material experience stuff!”

-Non-material, as in defying the laws of physics? In that case, I do get it, and I super don’t-

“It’s perfectly compatible with the laws of physics.”

-Then I don’t know what you mean.

“I mean that there’s clearly some experiential stuff accompanying the physical process.”

-I don’t know what that means.

“Do you have experience or not?”

-I have internal representations, and I can access them to some degree. It’s up to you to tell me if that’s experience or not.

“Okay, look. You can conceptually separate the information content from how it feels to have that content. Not physically separate them, perhaps, but conceptually. The what-it-feels-like part is qualia. So do you have that or not?”

-I don’t know what that means, so I don’t know. As I said, I have internal representations, but I don’t think there’s anything in addition to those representations, and I’m not sure what that would even mean.


and so on. The conversation can also get ugly, with the dash-prefixed author accusing the quotation author of being unscientific and/or the quotation author accusing the dash-prefixed author of being willfully obtuse.

On LessWrong, people are arguably pretty good at not talking past each other, but the pattern above still happens. So what’s going on?

The Two Intuition Clusters

The basic model I’m proposing is that core intuitions about consciousness tend to cluster into two camps, with most miscommunication resulting from one camp failing to get through to the other. For this post, we’ll call the camp of the dash-prefixed author Camp #1 and the camp of the quotation author Camp #2.

Characteristics

Camp #1 tends to think of consciousness as a non-special high-level phenomenon. Solving consciousness is then tantamount to solving the Meta-Problem of consciousness, which is to explain why we think/claim to have consciousness. (Note that this means explaining the full causal chain in terms of the brain’s physical implementation.) In other words, once we’ve explained why people keep uttering the sounds kon-shush-nuhs, we’ve explained all the hard observable facts, and the idea that there’s anything else seems dangerously speculative/unscientific. No complicated metaphysics is required for this approach.

Conversely, Camp #2 is convinced that there is an experience thing that exists in a fundamental way. There’s no agreement on what this thing is – some postulate causally active non-material stuff, whereas others agree with Camp #1 that there’s nothing operating outside the laws of physics – but they all agree that there is something that needs explaining. Moreover, even if consciousness is compatible with the laws of physics, it still poses a conceptual mystery relative to our current understanding. A complete solution (if it is even possible) may also have a nontrivial metaphysical component.

The camps are ubiquitous; once you have the concept, you will see it everywhere consciousness is discussed. Even single comments often betray allegiance to one camp or the other. Apparent exceptions are usually from people who are well-read on the subject and may have optimized their communication to make sense to both sides.

The Generator

With the description out of the way, let’s get to the interesting question: why is this happening? I don’t have a complete answer, but I think we can narrow down the disagreement. Here’s a somewhat indirect explanation of the proposed crux.

Suppose your friend John tells you he has a headache. As an upstanding Bayesian agent, how should you update your beliefs here? In other words, what is the explanandum – the thing-your-model-of-the-world-needs-to-explain?

You may think the explanandum is “John has a headache”, but that’s smuggling in some assumptions. Perhaps John was lying about the headache to make sure you leave him alone for a while! So a better explanandum is “John told me he’s having a headache”, where the truth value of the claim is unspecified.
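
To make the distinction concrete, here is a minimal worked sketch in Python. All numbers are made up, including a hypothetical probability that John falsely reports a headache just to be left alone; the only point is that the evidence you condition on is the claim, not the headache itself.

    # Toy Bayesian update: condition on "John SAID he has a headache",
    # not on "John HAS a headache". All numbers are illustrative.

    p_headache = 0.1               # prior that John has a headache
    p_claim_given_headache = 0.9   # he reports it if he has one
    p_claim_given_none = 0.02      # hypothetical: he lies to be left alone

    # Law of total probability: P(claim)
    p_claim = (p_claim_given_headache * p_headache
               + p_claim_given_none * (1 - p_headache))

    # Bayes' rule: P(headache | claim)
    p_headache_given_claim = p_claim_given_headache * p_headache / p_claim

    print(f"P(headache | claim) = {p_headache_given_claim:.2f}")  # ~0.83, not 1.0

The claim is strong evidence for the headache, but the two are not the same explanandum: how much credence you then put in the headache itself depends on your model of John.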

(If we want to get pedantic, the claim that John told you anything is still smuggling in some assumptions since you could have also hallucinated the whole thing. But this class of concerns is not what divides the two camps.)

Okay, so if John tells you he has a headache, the correct explanandum is “John claims to have a headache”, and the analogous thing holds for any other sensation. But what if you yourself seem to experience something? This question is what divides the two camps:

  • According to Camp #1, the correct explanandum is only slightly more than “I claim to have experienced X” (where X is the apparent experience). After all, if we can explain exactly why you, as a physical system, uttered the words “I experienced X”, then there’s nothing else to explain. The reason it’s slightly more is that you do still have some amount of privileged access to your own experience: a one-sentence testimony doesn’t communicate the full set of information contained in a subjective state – but this additional information remains metaphysically non-special. (HT: wilkox.)

  • According to Camp #2, the correct explanandum is “I experienced X”. After all, you perceive your experience/consciousness directly, so it is not possible to be wrong about its existence.

In other words, the two camps disagree about the epistemic status of apparently perceived experiences: for Camp #2, they’re epistemic bedrock, whereas for Camp #1, they’re model outputs of your brain, and like all model outputs of your brain, they can be wrong. The axiom of Camp #1 can be summarized in one sentence as “you should treat your own claims of experience the same way you treat everyone else’s”.

From the perspective of Camp #1, Camp #2 is quite silly. People have claimed that fire is metaphysically special, then intelligence, then life, and so on, and their success rate so far is 0%. Consciousness is just one more thing on this list, so the odds that they are right this time are pretty slim.

From the perspective of Camp #2, Camp #1 is quite silly. Any apparent evidence against the primacy of consciousness necessarily backfires as it must itself be received as a pattern of consciousness. Even in the textbook case where you’re conducting a scientific experiment with a well-defined result, you still need to look at your screen (or other output device) to read the result, so even science bottoms out in predictions about future states of consciousness!

An even deeper intuition may be what precisely you identify with. Are you identical to your physical brain or body (or the program/algorithm implemented by your brain)? If so, you’re probably in Camp #1. Are you a witness of, or identical to, the set of experiences exhibited by your body at any moment? If so, you’re probably in Camp #2. That said, this paragraph is pure speculation, and the two-camp phenomenon doesn’t depend on it.

Representations in the literature

If you ask GPT-4 about the two most popular academic books about consciousness, it usually responds with

  1. Consciousness Explained by Daniel Dennett; and

  2. The Conscious Mind by David Chalmers.

If the camps are universal, we’d expect the two books to represent one camp each because economics. As it happens, this is exactly right!

Dennett devotes an entire chapter to the proper evaluation of experience claims, and the method he champions (called “heterophenomenology”) is essentially a restatement of the Camp #1 axiom. He suggests that we should treat experience claims like fictional worldbuilding, where such claims are then “in good standing in the fictional world of your heterophenomenology”. Once this fictional world is complete, it’s up to the scientist to evaluate how its components map to the real world. Crucially, you’re supposed to apply this principle even to yourself, so the punchline is again that the epistemic status of experience claims is always up for debate.

Conversely, Chalmers says this in the introductory chapter of his book (emphasis added):

Some say that consciousness is an “illusion,” but I have little idea what this could even mean. It seems to me that we are surer of the existence of conscious experience than we are of anything else in the world. I have tried hard at times to convince myself that there is really nothing there, that conscious experience is empty, an illusion. There is something seductive about this notion, which philosophers throughout the ages have exploited, but in the end it is utterly unsatisfying. I find myself absorbed in an orange sensation, and something is going on. There is something that needs explaining, even after we have explained the processes of discrimination and action: there is the experience.

True, I cannot prove that there is a further problem, precisely because I cannot prove that consciousness exists. We know about consciousness more directly than we know about anything else, so “proof” is inappropriate. The best I can do is provide arguments wherever possible, while rebutting arguments from the other side. There is no denying that this involves an appeal to intuition at some point; but all arguments involve intuition somewhere, and I have tried to be clear about the intuitions involved in mine.

In other words, Chalmers is having none of this heterophenomenology stuff; he wants to condition on “I experience X” itself.

Why it matters

While my leading example was about miscommunication, I think the camps have consequences in other areas as well, which are arguably more significant. To see why, suppose we

  • model the brain as a computational network; and

  • ask where consciousness is located in this network.

For someone in Camp #1, the answer has to be something like this:

Consciousness is [the part of our brain that creates a unified narrative and produces our reports about “consciousness”].[1] So consciousness will be a densely connected part of this network – that is, unless you dispute that it’s even possible to restrict it to just a part of the network, in which case it’s more like “some of the activity of the full network”. Either way, consciousness is identified with its functional role, which makes the concept inherently fuzzy. If we built an AI with a similar architecture, we’d probably say it also had consciousness – but if someone came along and claimed, “wait a minute, that’s not consciousness!”, there’d be no fact of the matter as to who is correct, any more than there’s a fact of the matter about the precise number of pebbles required to form a heap.

Conversely, Camp #2 views consciousness as a precisely defined phenomenon. And if this phenomenon is causally responsible for our talking about it,[2] then you can see how this view suggests a very different picture: consciousness is now a specific thing in the brain (which may or may not be physically identifiable with a part of the network), and the reason we talk about it is that we have it – we’re reporting on a real thing.

These two views suggest substantially different approaches to studying the phenomenon – whether or not something has clear boundaries is an important property! So the camps don’t just matter for esoteric debates about qualia but also for attempts to reverse-engineer consciousness and, to a lesser extent, for attempts to reverse-engineer the brain...

… and also for morality, which is a case where the camps are often major players even if consciousness isn’t mentioned. Camp #2 tends to view moral value as mostly or entirely reducible to conscious states, an intuition so powerful that they sometimes don’t realize it’s controversial. But the same reduction is problematic for Camp #1 since consciousness is now an inherently fuzzy phenomenon – and there’s no agreed-upon way to deal with this problem. Some want to tie morality to consciousness anyway, which can arguably work under a moral anti-realist framework. Others deny that morality should be about consciousness to begin with. And some bite the bullet and accept that their views imply moral nihilism. I’ve seen all three views (plus the one from Camp #2) expressed on LessWrong.

Discussion/Conclusions

Given the gulf between the two camps, how does one avoid miscommunication?

The answer may depend on which camp you’re in. For the reasons we’ve discussed, it tends to be easier for ideas from Camp #1 to make sense to Camp #2 than vice versa. If you study the brain looking for something fuzzy, there’s no reason you can’t still make progress if the thing actually has crisp boundaries – but if you bake the assumption of crisp boundaries into your approach, your work will probably not be useful if the thing is fuzzy. Once again, we need only look at the two most prominent theories in the literature for an example of this. Global Workspace Theory is peak Camp #1 stuff,[3] but it tends to be at least interesting to most people in Camp #2. Integrated Information Theory is peak Camp #2 stuff,[4] and I’ve yet to meet a Camp #1 person who takes it seriously. Global Workspace Theory is also the more popular of the two, even though Camp #1 is supposedly in the minority among researchers.[5]

The same pattern seems to hold on LessWrong across the board: Consciousness Explained gets brought up a lot more than The Conscious Mind, Global Workspace Theory gets brought up a lot more than Integrated Information Theory, and most high-karma posts (modulo those of Eliezer) are Camp #1 adjacent – even though there are definitely a lot of Camp #2 people here. Kaj Sotala’s Multiagent Models of Mind series is a particularly nice example of a Camp #1 idea[6] with cross-camp appeal, and there’s nothing analogous out of Camp #2.

So if you want to share ideas about this topic, it’s probably a good idea to be in Camp #1. If that’s not possible, I think just having a basic understanding of how ~half your audience thinks is helpful. There are a lot of cases where asking, “does this argument make sense to people with the other epistemic starting point?” is all you need to avoid the worst misunderstandings.

You can also try to convince the other side to switch camps, but this tends to work only around 0% of the time, so it may not be the best practical choice.


  1. ↩︎

    This doesn’t mean that anything claiming to be conscious is conscious. Under this view, consciousness is about the internal organization of the system, not just about its output. After all, a primitive chatbot can be programmed to make arbitrary claims about consciousness.

  2. ↩︎

    This assumption is not trivial. For example, David Chalmers’ theory suggests that consciousness has little to no impact on whether we talk about it. The class of theories that model consciousness as causally passive is called epiphenomenalism.

  3. ↩︎

    Global Workspace Theory is an umbrella term for a bunch of high-level theories that attempt to model the observable effects of consciousness under a computational lens.

  4. ↩︎

    Integrated Information Theory holds that consciousness is identical to the integrated information of a system, modeled as a causal network. There are precise rules to determine which part(s) of a network are conscious, and there is a scalar quantity called Φ (“big phi”) that determines the amount of consciousness of a system, as well as a much more complex object (something like a set of points in high-dimensional Euclidean space) that determines its character.

  5. ↩︎

    According to David Chalmers’ book, the proportion skews about 2/3 vs. 1/3 in favor of Camp #2, though he provides no source for this, merely citing “informal surveys”. The phenomenon he describes isn’t exactly the same as the two-camp model, but it’s so similar that I expect high overlap.

  6. ↩︎

    I’m calling it a Camp #1 idea because Kaj defines consciousness as synonymous with attention for the purposes of the sequence. Of course, this is just a working definition.