Transhumanism and the denotation-connotation gap

A word’s denotation is our conscious definition of it. You can think of this as the set of things in the world with membership in the category defined by that word, or as a set of rules defining such a set. (Logicians call the former the category’s extension, and the latter its intension.)
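The extension/intension distinction can be made concrete in code. Here is a minimal sketch: the intension is a membership rule (a predicate), and the extension is whatever that rule picks out of a universe of things. The universe and the rule are toy data, invented for illustration.

```python
# Toy model of denotation. The intension is a rule (predicate);
# the extension is the set of things in the universe satisfying it.
# The universe and the category knowledge here are illustrative only.

universe = ["sparrow", "penguin", "human", "bat", "oak"]

def is_bird(thing):
    """Intension: a rule defining membership in the category 'bird'."""
    return thing in ("sparrow", "penguin")  # toy knowledge base

# Extension: the subset of the universe the rule picks out.
extension = {t for t in universe if is_bird(t)}
print(sorted(extension))  # ['penguin', 'sparrow']
```

The same category can be given by many different rules; what legislation (and analytic thought generally) pins down is the rule, while what the world supplies is the extension.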

A word’s connotation is the emotional coloring of the word. AI geeks may think of it as a set of pairs: other concepts that the word activates or inhibits, together with the change in the odds of recalling each of those concepts.
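That "set of pairs" picture can be sketched directly. Below, each word maps to (concept, change-in-log-odds) pairs, and hearing the word shifts the baseline probability of recalling each associated concept. The words, concepts, and weights are all hypothetical, not drawn from any real lexicon.

```python
# Toy model of connotation as weighted priming: each word maps to pairs of
# (associated concept, delta in log-odds of recalling that concept).
# All entries and weights are invented for illustration.
import math

connotation = {
    "human": [("warmth", +1.2), ("dignity", +0.9), ("machine", -0.8)],
}

def recall_probability(base_prob, delta_log_odds):
    """Shift a baseline recall probability by a log-odds delta."""
    log_odds = math.log(base_prob / (1 - base_prob)) + delta_log_odds
    return 1 / (1 + math.exp(-log_odds))

# Hearing "human" primes some concepts and inhibits others.
for concept, delta in connotation["human"]:
    p = recall_probability(0.10, delta)
    print(f"{concept}: baseline 0.10 -> {p:.2f}")
```

A positive delta raises the odds of recalling the concept (activation); a negative delta lowers them (inhibition). Nothing in this structure is a definition, which is exactly why the connotative and denotative pictures of a word can come apart.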

When we think analytically about a word—for instance, when writing legislation—we use its denotation. But when we are in values/judgement mode—for instance, when deciding what to legislate about, or when voting—we use its denotation less and its connotation more.

This denotative-connotative gap can cause people to behave less rationally when they become more rational. People who think and act emotionally are at least consistent. Train them to think analytically, and they will choose goals using connotation but pursue them using denotation. That’s like hiring a Russian speaker to manage your affairs because he’s smarter than you, but you have to give him instructions via Google translate. Not always a win.

Consider the word “human”. It has wonderful connotations, to humans. Human nature, humane treatment, the human condition, what it means to be human. Often the connotations are normative rather than descriptive; behaviors we call “inhumane” are done only by humans. The denotation is bare by comparison: Featherless biped. Homo sapiens, as defined by 3 billion base pairs of DNA.

Some objections to transhumanism are genuine objections to the thing itself. But some are caused by the denotative-connotative gap. A person’s analytic reasoner says, “What about this transhumanism thing, then?”, and their connotative reasoner replies, “Human good! Ergo, not-human bad! QED.”

I don’t mean that we can get around this by renaming “transhumanism” as “humanism with sprinkles!” This confusion over denotation and connotation happens inside another person’s head, and you can’t control it with labels. If you propose making a germline genetic modification, this will trigger thoughts about the definition of “human” in someone else’s head. When that person asks themselves how they feel about this modification, they take the phrase “not human”, which was chosen for its denotation, go into values mode, access its full connotation, attach the label “bad” to “not human”, and pass the result back to their analytic reasoner to decide what to do about it. Fixing a disease gene can get labelled “bad” because the connotative reasoner makes a judgement about a different concept than the analytic reasoner thinks it did.

I don’t think the solution to the d-c gap is to operate only in denotation mode. Denotation is what 1970s AI programs had. But we can try to be aware of the influence of connotations, and to prefer words that say what we mean over the overused and hence connotation-laden words that first spring to mind. Connotation isn’t a bad thing—it’s part of what makes us vertebrate, after all.