The intelligence-sentience orthogonality thesis

You’re familiar with the orthogonality thesis. Goals and intelligence are more or less independent of one another. There’s no reason to expect an intelligent AI will also be an aligned AI simply because it is intelligent.

Popular writing suggests to me that there’s a need to spell out some other orthogonality theses as well. We could propose:

  • The intelligence-sentience orthogonality thesis

  • The sentience-agency orthogonality thesis

  • The self-reflection-sentience orthogonality thesis

What I mean by “sentience” and “intelligence”

In this article I generally use the word sentience to describe the state of experiencing qualia, or the property a mind has that means there is “something that it is like to be” that mind. I use that term because many people have conflicting ideas about what consciousness means. When I use the term “consciousness”, I try to be specific about what I mean, and typically I use the word while explaining other people’s ideas about it.

When I use the word “intelligence” I mean the ability to represent information abstractly and draw inferences from it to complete tasks, or at least to conceptualize how a task can be completed. This is close to the g factor measured in intelligence tests, which captures how well a human or other agent can recognize patterns and draw inferences from them. It’s also close to what we mean by “artificial intelligence”.

Mind, intelligence, self-awareness, and sentience as overlapping Euler sets

It might be helpful to represent each of these concepts (mind, intelligence, self-awareness, agency, and sentience) as overlapping sets on a Euler diagram. We might conceptualize an entity possessing any of the above as a kind of mind. Self-awareness and agency plausibly must be forms of intelligence, whereas arguably there are simple qualia experiences that do not require intelligence: perhaps a pure experience of blueness or pain or pleasure does not require any sort of intelligence at all. Finally, we can imagine entities that have some or all of the above.

The Euler diagram shows specific examples of where various minds might be positioned. I’m pretty low-confidence about many of these placements. The main point to take away from the diagram is that when thinking about minds, one should get in the habit of imagining the various capacities separately, and not assume that all the capacities of the human mind will go together in other minds.
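To make the set-based framing concrete, here is a minimal sketch in Python. It is purely illustrative: the capacity names and the example placements are my own low-confidence assumptions in the spirit of the diagram, not claims drawn from it.

```python
# Illustrative sketch: treat each mind as an independent bundle of capacities,
# so that possessing one capacity never implies possessing any other.
# The example placements below are low-confidence assumptions, not settled claims.

CAPACITIES = {"intelligence", "self_awareness", "agency", "sentience"}

example_minds = {
    "typical adult human": {"intelligence", "self_awareness", "agency", "sentience"},
    "game-playing AI": {"intelligence", "agency"},        # agentic, but presumably not sentient
    "simple qualia experiencer": {"sentience"},           # e.g. a bare experience of blueness or pain
    "large language model": {"intelligence"},             # placement debatable
}

def overlap(a, b):
    """Return the capacities two minds share (the Euler-diagram intersection)."""
    return a & b

human = example_minds["typical adult human"]
for name, caps in example_minds.items():
    assert caps <= CAPACITIES  # every placement uses only the capacities defined above
    print(f"{name}: {sorted(caps)}; shared with human: {sorted(overlap(caps, human))}")
```

The only point of the structure is that each capacity is an independent set membership, so adding or removing one never forces a change in another; that independence is the habit of mind the diagram is meant to encourage.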

The intelligence-sentience orthogonality thesis

In his book Being You: The New Science of Consciousness, Anil Seth describes research on consciousness carried out by himself and others over the last ten years (by “consciousness” he means more or less what I mean when I say “sentience”).

Seth correctly describes the intelligence-sentience orthogonality thesis, and defends it on two cumulative grounds which together establish that intelligence is neither necessary nor sufficient for sentience. First, there isn’t enough evidence for functionalism, the proposition that consciousness always arises out of a particular kind of information processing (the kind humans do) regardless of the physical substrate, whether that substrate is a biological brain, silicon, or anything else. Seth argues functionalism is a necessary condition for artificial consciousness. Second, Seth argues we don’t know whether the kind of information processing that constitutes intelligence is even sufficient to produce consciousness. The common but fallacious intuition that intelligent agents must be conscious comes from little more than the fact that humans happen to be both conscious and intelligent, which in itself isn’t evidence that the two concepts can’t be dissociated.

So far so good, and Seth puts the intelligence-sentience orthogonality thesis better than I could. In reality, there’s no reason to think an artificial general intelligence must be conscious. In fact, as Prof. Seth describes in his book, we don’t really know yet how consciousness arises in the brain, or why it developed, evolutionarily speaking. But although we don’t know how consciousness is generated, we have a pretty clear idea about how intelligence arises, at least artificially. Current AI systems aren’t quite general enough for most people to count them as ‘general intelligence’, but in principle, they seem capable of doing any task given enough of the right kind of training.

We could distinguish a few different forms of the thesis: strict intelligence-sentience orthogonality, possible intelligence-sentience orthogonality, general intelligence orthogonality, epistemic orthogonality, and theoretical orthogonality. I’ll describe each of these below.

Strict intelligence-sentience orthogonality

Strict intelligence-sentience orthogonality holds that intelligence and sentience are completely unrelated and that no form of intelligence leads to the development of sentience. This seems unreasonably strong, and I wouldn’t endorse it. Although neuroscientists’ ideas about the evolutionary origins of consciousness haven’t yet gone much beyond the level of speculation, it is clear to each of us that consciousness developed somehow in the process of our own evolution. Given the homology between human brains and other mammalian brains, it seems a pretty sure bet that mammals also have conscious experience similar to ours in many respects (and based on other scientific research, consciousness plausibly extends to fish and even shrimp).

Although it has been suggested that consciousness was more of an evolutionary spandrel than an adaptive mechanism that improved evolutionary fitness, Damasio argues that it serves an evolutionary purpose: to help us organize homeostatic feedback into higher-order feelings and make deliberative responses to those feelings. In that sense, any intelligence enabled by consciousness is a clear violation of the strict thesis. And if consciousness is more of an evolutionary spandrel, that still violates the strict thesis, because it suggests that some kind of informatic organization that produces intelligence also produces consciousness.

Possible intelligence-sentience orthogonality

Possible intelligence-sentience orthogonality holds that it is possible to build any sort of intelligence without sentience arising. Suppose some sorts of intelligence, like proprioceptive or homeostatic intelligence, tend to go along with consciousness in organic life. On this account, you could still replicate the functions of proprioception or homeostasis without consciousness being strictly required. This seems like a plausible, live possibility. Consider that computers have been doing all sorts of tasks humans do with conscious processes, right down to adding two numbers together, without obviously leveraging any kind of conscious processing. Consider also that humans themselves can, with enough practice, add numbers, play the piano, or work out the trajectory needed to throw a ball without being conscious of the processes involved, even though with less practice those same tasks would require conscious processing.

General intelligence orthogonality

Perhaps something about humans’ regulation of their homeostatic systems necessarily gives rise to conscious processing; there’s something in that process that would just be difficult or impossible to achieve without consciousness. It might be that consciousness is tied to a few specific processes, processes that are not required for general intelligence. It might still be possible to build systems that are generally intelligent by any reasonable metric but do not include those intrinsically conscious processes. In fact, given the rising level of general intelligence in large language models, it seems quite likely that a general AI could perform a large range of intelligent tasks, and if the few human functions that absolutely require consciousness are not on the list, we still get unconscious artificial general intelligence.

Epistemic orthogonality

Under epistemic orthogonality (perhaps we could also call it agnostic orthogonality), a particular form of orthogonality may or may not hold, but from current evidence we simply don’t know. In that case, we have orthogonality with respect to our current knowledge of intelligent systems and conscious entities, but not necessarily orthogonality in fact. We do seem to be faced with at least this level of orthogonality, for several reasons: we don’t know what computations are required for general intelligence; we don’t know what computations or substrates produce consciousness; and we don’t know whether the computations producing general intelligence do in fact inevitably produce consciousness.

Theoretical orthogonality

Intelligence and sentience can, at the very least, be conceptually distinguished, and thus are in theory orthogonal. Even if it isn’t clear empirically whether or not they are intrinsically linked, we ought to maintain a conceptual distinction between the two, in order to form testable hypotheses about whether they are in fact linked, and in order to reason about the nature of any link.

Sentience-agency orthogonality

Sentience-agency orthogonality describes the state of affairs in which an entity can be sentient without having any agency, and can be agentic without having any sentience. A person with locked-in syndrome who is unable to communicate with others has sentience but not agency: while they have conscious experience (as confirmed by people with locked-in syndrome who have later learned to communicate), they’re completely unable to act within the world. Conversely, an AI playing StarCraft has plenty of agency within the game’s environment, but no one thinks that implies the AI is sentient. This isn’t because the player is confined to a game environment; if we magically let that AI control robots in the physical world that did the things StarCraft units can do, that AI would be agentic in the real world, but it still wouldn’t have sentience. Rather, it’s because the kind of intelligence necessary to produce agentic behavior isn’t the sort of intelligence necessary for sentience.

While making a decent argument for intelligence-sentience orthogonality, Seth actually tells his readers that AI takeover, while it “cannot be ruled out completely”, is “extremely unlikely”. It seems to me that he falls into the trap of ignoring sentience-agency orthogonality. He builds his case against AI existential risk purely on intelligence-sentience orthogonality, arguing that we really don’t know whether artificial general intelligence will be conscious:

Projecting into the future, the stated moonshot goal of many AI researchers is to develop systems with the general intelligence capabilities of a human being—so-called ‘artificial general intelligence’, or ‘general AI’. And beyond this point lies the terra incognita of post-Singularity intelligence. But at no point in this journey is it warranted to assume that consciousness just comes along for the ride. What’s more, there may be many forms of intelligence that deviate from the humanlike, complementing rather than substituting or amplifying our species-specific cognitive toolkit—again without consciousness being involved.

It may turn out that some specific forms of intelligence are impossible without consciousness, but even if this is so, it doesn’t mean that all forms of intelligence—once exceeding some as yet unknown threshold—require consciousness. Conversely, it could be that all conscious entities are at least a little bit intelligent, if intelligence is defined sufficiently broadly. Again, this doesn’t validate intelligence as the royal road to consciousness.

In other words, it may be possible to build a kind of artificial general intelligence without building in the sort of intelligence required for sentience. But Seth doesn’t say why consciousness matters for AI take-off. The missing premise seems to be that without consciousness, there’s no agency. And that premise is wrong, because we’ve already seen that we can have agents, like computer players within computer games, that are agentic but not conscious.

Self-awareness-sentience orthogonality

Self-reflection or self-awareness is another property of minds that some people attribute to consciousness, and perhaps to sentience. Self-awareness-sentience orthogonality suggests that an intelligence can be self-aware without having conscious internal states, and can have conscious internal states without being self-aware. Self-awareness is the ability of an intelligence to model its own existence and at least something about its own behavior. Self-awareness is probably necessary for recursive self-improvement, because in order to self-improve, an intelligence needs to be able to manipulate its own processes. Arguably, Bing’s chatbot “Sydney” has included a limited form of self-awareness, because it can leverage information on the Internet about the Bing chatbot to reason about itself. Self-awareness is similar to, but not the same as, situational awareness, in which an intelligence can reason about the context within which it is being run.

Consciousness researchers Dehaene, Lau, and Kouider (2017) distinguish two distinct cognitive definitions of “consciousness”, neither of which is really what I mean by “sentience”. The first is global availability: the ability of a cognitive system to represent an object of thought such that it is globally available to most parts of the system for verbal and non-verbal processing. The second is self-monitoring: the system’s capacity to monitor its own processing, also referred to by psychologists as “metacognition”. I argue that self-monitoring is entirely distinct from sentience. While we humans can have conscious experiences, or qualia, that include being conscious of ourselves and our own existence, it’s not clear that conscious experience of the cognitive process of self-monitoring (a kind of sentience) is necessary to do the self-monitoring task.

As evidence for this, consider that you’re able to do many things that you’re not consciously aware of. As I type this, I’m not conscious of all of the places my fingers move to type words on the page, because over time my brain has automatized the process of typing to the extent that I’m not aware of every single keystroke, particularly when I’m focusing on the semantic or intellectual meaning of my words rather than their spelling. This is a widespread feature of human existence: consider all the times you’re driving, or walking, or doing any number of tasks that require complex coordination of your own body, which you are nevertheless able to do while focused on other things.

Other relevant orthogonalities

Andrew Critch’s recent article Consciousness as a conflationary alliance term does an excellent job describing a wider set of distinct phenomena that people call “consciousness”, including introspection, purposefulness, experiential coherence, holistic experience of complex emotions, experience of distinctive affective states, pleasure and pain, perception of perception, awareness of awareness, symbol grounding, proprioception, awakeness, alertness, detection of cognitive uniqueness, mind-location, sense of cognitive extent, memory of memory, and vestibular sense. I suspect that humans have qualia (conscious experience) of all of the above cognitive processes, and that as a result, when asked to define consciousness, people come up with one or more of the above items by way of example. They might sometimes affirmatively argue that one or more of these processes is the essence of consciousness. But all of these terms seem distinct, and without a clear argument to the contrary, we ought to treat them as such.

Hopefully it is clear to the reader that none of the above processes simply is sentience. Perhaps some would argue that they are forms of sentience, and perhaps it’s meaningless to talk about conscious experience, or qualia, entirely devoid of any content. Nevertheless, at least at the conceptual level, we ought to recognize each of the processes Andrew’s interviewees describe as “consciousness” as, at most, specific forms of sentience. Most do not even rise to that level, because non-conscious forms of informatic representation of, for instance, memory of memory, introspection, and mind-location are present in fairly unsophisticated computer programs.

Overview

After reading Andrew’s post, I considered whether this one was even necessary, as his covers a lot of the ground mine does and has the added benefit of being a form of empirical research describing what a sample of research participants say about consciousness. The distinct idea I want to communicate is that, beyond the cognitive processes and forms of qualia that are sometimes described as consciousness itself, there are other properties of humans, such as agency and intelligence, that we very easily conflate into some single essence or entity, and that we need to think systematically about these distinct concepts in order to avoid confusion. As we increasingly interact with machines that have one or more of these abilities or properties, we need to distinguish carefully between them.

All those forms of consciousness, intelligence, agency, and other processes are often bunched together as “mind”. I think one important reason for this is that humans have all the relevant processes, and we spend most of our lifetimes interacting with ourselves and with other humans, so we find little practical need in daily life to separate these concepts. Yet people who have never thought about artificial intelligence or philosophy of mind may soon be called on to vote on political decisions about the regulation of artificial intelligence risk. Even seasoned researchers in psychology, philosophy of mind, or artificial intelligence do not always demonstrate the clarity of mind to separate distinct mind processes.

I believe it would be useful to develop a taxonomy of mind processes in order to clearly distinguish between them in our conversations about ourselves, about artificial intelligence, and about artificial minds. We might build on this to have much more intelligent conversations, within a research context, within this community, and within broader society, about the intelligences, the agents, and perhaps the minds that we seem to be on the cusp of creating. It’s possible a taxonomy like this already exists in philosophy of mind, but I’m not aware of anything like it.