Sentience, Sapience, Consciousness & Self-Awareness: Defining Complex Terms

The terms in the title are commonly used in crucial debates surrounding morality & AI. Yet, I feel like there is no clear consensus about the meaning of those terms. The words are often used interchangeably, causing people to think they are all the same or very closely related. I believe they’re not. Clearly separating these terms makes it a lot easier to conceptualize a larger “spectrum of consciousness”.

Disclaimer: I expect some people to be upset that I'm 'taking' these terms and changing their definitions. Feel free to propose different terms for the concepts below!

Consciousness

“Consciousness” is often taken to mean “what we are”. “Our” voice in our heads, the “soul”. I propose a more limited definition. A conscious entity is a system with an “internal observer”. At this very moment, these words are being read. Hello, ‘observer’! You probably have eyes. Focus on something. There is an image in your mind. Take the very core of that: not the intellectual observations connected to it, not the feelings associated with it, just the fact that a mental image exists. I think that is the unique ability of a conscious individual or system.

Sentience

Wikipedia claims that consciousness is sentience. Wiktionary has a definition for sentient that includes human-like awareness and intelligence. Once again, I propose a more limited definition. A sentient entity is a system that can experience feelings, like pleasure and pain. Consciousness is a prerequisite: without an internal observer, there is nothing to experience these feelings.

I believe sentience is the bedrock of morality. Standing on a rock probably doesn’t generate any observations in the rock—certainly not pleasant or unpleasant ones. Standing on a cat clearly does produce deeply unpleasant feelings for the cat. Defining morality as a system that tries to generate long-term positive experiences and to reduce negative experiences seems to work pretty well. In that case, standing on cats is not recommended.

In these terms, consciousness is not the threshold of morality. Perhaps we discover that rocks are conscious. When we stand on them, we slightly compress their structure and rocks somehow hold an internal observer that is aware of that. But it doesn’t have any feelings associated with that. It doesn’t experience pain or joy or fear or love. It literally doesn’t care what you do to it. It would be strange, it would overhaul our understanding of consciousness, it would redefine pet rocks—but it doesn’t make it immoral for us to stand on rocks.

Self-Awareness

Self-awareness is often seen as a big and crucial thing. Google “When computers become”, and “self aware” is the second suggestion. I believe self-awareness is vague and relatively unimportant. Does it mean “knowing you’re an entity separate from the rest of the world”? I think self-driving cars can check that box. Do you check that box when you recognize yourself in the mirror? Or do you need deep existential thought and thorough knowledge of your subconscious and your relationship to the world and its history? In that case, many humans would fail that test.

I believe self-awareness is a spectrum with many, many degrees. It’s significantly correlated with intelligence, but neither strongly nor necessarily. Squirrels perform highly impressive calculations to navigate their bodies through the air, but these don’t seem to be “aware calculations”, and squirrels don’t seem exceptionally self-aware.

Sapience

According to Wikipedia:

Wisdom, sapience, or sagacity is the ability to contemplate and act using knowledge, experience, understanding, common sense and insight.

Sapience is closely related to the term “sophia” often defined as “transcendent wisdom”, “ultimate reality”, or the ultimate truth of things. Sapiential perspective of wisdom is said to lie in the heart of every religion, where it is often acquired through intuitive knowing. This type of wisdom is described as going beyond mere practical wisdom and includes self-knowledge, interconnectedness, conditioned origination of mind-states and other deeper understandings of subjective experience. This type of wisdom can also lead to the ability of an individual to act with appropriate judgement, a broad understanding of situations and greater appreciation/compassion towards other living beings.

I find sapience to be much more interesting than self-awareness. Wikipedia has rather high ambitions for the term, and once again I propose a more limited definition. Biologically, we are all classified as homo sapiens. So it makes sense to me that “sapience” is the ability to understand and act with roughly human-level intelligence.

Here is a fascinating article about tricking GPT-3. It includes some very simple instructions, like “do not use a list format” and “use three periods rather than single periods after each sentence”, which GPT-3 completely ignores in its answer. GPT-3 copies the format of the text and elaborates in a similar style—exactly the thing the instructions told it not to do.

Human children can easily follow such instructions. AIs can’t, nor can animals like cats and dogs. Some animals seem to be able to transmit some forms of information, but this seems to be quite limited to things like “food / danger over there”. As far as I know, no animal builds up a complex, abstract model of the world and intimately shares that with others (exactly the thing we’re doing right now).

On the other hand: lots of animals do clearly build up complex models of the world in their minds. The ability to communicate them mainly seems to rely on language comprehension. Language is tremendously powerful, but should it be the threshold of ‘sapience’?

Language acquisition seems to be nearly binary. Children learn to speak rather quickly. Among healthy eight-year-old children, there is no separate category for children that speak “Part A” of English but not “Part B”.

It seems like raw human brain power + basic language skills = easy access to near infinite complex abstract models. And indeed, many concepts seem to be easily taught to a speaking child. “Store shoes there”, “don’t touch fire”, “wear a jacket when it’s cold outside”.

Simultaneously, relatively simple things can quickly become too complex for regular humans to communicate and handle. Take COVID for example. The virus itself is relatively simple—exponential growth is not a new concept. Yet, our societies have had a hard time properly communicating about it. We started mostly with fear and panic, and then transitioned to politicizing things like facemasks and vaccines, turning things into a predictable Greens vs Blues mess.
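The math our societies struggled to communicate really is this simple. A minimal sketch, using made-up numbers (100 initial cases, doubling every 3 days—illustrative parameters, not real epidemiological data):

```python
def cases_after(days, initial_cases=100, doubling_time=3):
    """Unchecked exponential growth: cases double every `doubling_time` days."""
    return initial_cases * 2 ** (days / doubling_time)

# Ten doublings in a month turns 100 cases into over 100,000.
print(round(cases_after(0)))   # 100
print(round(cases_after(30)))  # 102400
```

A few lines of arithmetic; the hard part, as argued above, is the coordination, not the model.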

I can imagine beings with a communication method that comes as natural to them as talking about the weather comes to us, who can easily talk about subjects like those above, whose civilizations quickly and painlessly coordinate around COVID, and whose Twelfth Virtue is not “nameless” / “the void”. To those beings, humans might barely seem sapient, like bees and ants barely seem sapient to us.

So my definition of sapience would be something like the ability to function with broad, diverse and complex models of the world, and to appropriately share these models with others. It’s technically a spectrum, but basic language ability seems to be a massive jump upwards here.

The philosophical zombie is sapient, but not conscious. Self-awareness does seem to be fundamentally connected to sapience. Any functional model of the world requires the ability to “model yourself”. Proper sapience also seems to require the ability to be “receptive” towards the concept of self-awareness.

I think this results in the following map of possibilities:

I believe these definitions make discussing related subjects a lot more fruitful. I’d love to hear your opinion about it!