Can an AI Have Feelings? or that satisfying crunch when you throw Alexa against a wall

This is an exploratory article on the nature of emotions and how that relates to AI and qualia. I am not a professional AI or ML researcher, and I approach the issue as a philosopher. I am here to learn. Rebuttals and clarifications are strongly encouraged.

Prior reading (or at least skimming):

https://plato.stanford.edu/entries/emotion/#ThreTradStudEmotEmotFeelEvalMoti

In the reading, “Traditions in the Study of Emotions” and “Concluding Remarks” are the most essential. There we can see the fault line emerge over what would have to be true for an AI to have emotional experience. The emotions of humans and the so-called emotions of AI have more differences than similarities.

--

The significance of this problem is that emotions, whatever they are, are essential to Homo sapiens. I wanted to begin the inquiry with something typically associated with humans, namely emotions, to see if we could shed any more light on AI Safety research by approaching it from a different angle.

Human emotions have six components:

“At first blush, we can distinguish in the complex event that is fear an evaluative component (e.g., appraising the bear as dangerous), a physiological component (e.g., increased heart rate and blood pressure), a phenomenological component (e.g., an unpleasant feeling), an expressive component (e.g., upper eyelids raised, jaw dropped open, lips stretched horizontally), a behavioral component (e.g., a tendency to flee), and a mental component (e.g., focusing attention).” – SEP “Emotion” 2018

It is not clear which of these is prior to which and exactly how correlated they are. An AI would only actually “need” the evaluative component in order to act well. Perhaps you could include the behavioral component for an AI as well if you consider it using heuristics; however, a heuristic is a tool for evaluating information, while a “tendency to flee” resulting from fear does not require any amount of reflection, only instinct. You might object that “it all takes place in the brain” and therefore is evaluative. But the brain is not a single processing system. The “primal” and instinctual parts of the brain (e.g., the amygdala) scream out to flee, but fear is not evidence in the same way that deliberation provides evidence. What we call rationality concerns becoming better at overriding the emotions in favor of heuristics, abstraction, and explicit reasoning.

An AI would need evaluative judgment, but it would not need a phenomenological component in order to motivate behavior, nor would it need behavioral tendencies which precede and sometimes jumpstart rational processing. It is the phenomenological component where qualia/consciousness would come in. It seems against the spirit of Occam’s Razor to say that because a machine can successfully imitate a feeling it has the feeling (assuming the feeling is a distinct event from the imitation of it). (Notice I use the word “feeling,” which indicates a subjective qualitative experience.) Of course, how could we know? The obvious fact is that we don’t have access to the qualitative experience of others. Induction from both my own experience and the study of biology and evolution tells me that humans and many animals have qualitative experience. I could go into more detail here if needed.

Using the same inductive process that allows me to consider fellow humans and my dog conscious, I may induce that an AI would not be conscious. I know that an AI passes numbers through non-linear functions to perform its calculations. As the calculations become more complex (usually thanks to additional computing power and nodes) and the ordering of the algorithms (the framework) becomes more sophisticated, ever more inputs can be evaluated and ever more meaningful (to us) outputs can be produced. At no point are physiological, phenomenological, or expressive components needed in order to motivate the process and move it along. If additional components are not needed, why posit that they will develop?
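To make that concrete, here is a minimal sketch in plain NumPy, with made-up random weights (my own toy illustration, not any particular model or framework), of what such an evaluation mechanically amounts to: numbers flow through weighted sums and non-linear functions, and an output comes out, with nothing resembling a physiological, phenomenological, or expressive component anywhere in the process.

```python
# A toy feedforward "evaluation": inputs -> weighted sums -> non-linearities -> output.
# The weights are random placeholders; the point is only that the whole process
# is arithmetic, with no felt or expressive component anywhere in it.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common non-linearity: keep positive values, zero out the rest.
    return np.maximum(0.0, x)

def evaluate(x, weights, biases):
    # Alternate linear maps and non-linearities for the hidden layers,
    # then apply a final linear map to produce the output.
    *hidden, last = list(zip(weights, biases))
    for W, b in hidden:
        x = relu(W @ x + b)
    W_out, b_out = last
    return W_out @ x + b_out

# A tiny network: 4 inputs -> 8 hidden units -> 2 outputs.
weights = [rng.normal(size=(8, 4)), rng.normal(size=(2, 8))]
biases = [np.zeros(8), np.zeros(2)]

x = np.array([0.2, -1.3, 0.7, 0.0])   # an arbitrary input
print(evaluate(x, weights, biases))   # the network's "evaluation" of it
```

More computing power and more nodes scale this up, but they only scale up the same arithmetic.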

If there are no emotions as motivations or feelings for an AI, an AGI should still be fully capable of doing anything and fooling anyone AND having horrific alignment problems, BUT it won’t have feelings.

However, if for some reason emotions are primarily evaluative, then we might expect emotion as motivation AND emotion as feeling to emerge as a later consequence in AI. An interesting consequence of this view is that it would hardly be possible to align an AGI. Here’s why: imagine human brains as primarily evaluative machines. They are not that different from those of the higher apes. In fact, the biggest difference is that we can use complex language to coordinate and pass on discoveries to the next generation. However, our individual evaluative potential is extremely limited. Some people can solve a differential equation in their head without paper; they are rare. In any case, our motivations and consciousness are built on extremely weak evaluative power compared to even present artificial systems. The complexity of the motivations and consciousness that would emerge from such a future AGI would be as far beyond our comprehension as our minds are beyond a paramecium’s.

Summary of my thoughts:

Premises

a. Emotions are primarily either evaluations, feelings, or motivations.

b. Evaluations are power and are input/output related. Motivations are instinctual. Feelings are qualitative, born out of consciousness, and probably began much later in evolutionary history, although some philosophers think even some very simple creatures have rudimentary subjective experience.

c. Carbon-based life forms which evolve into humans start with physiological and behavioral motivations, then much later develop expressive, mental, and evaluative components. Somewhere in there the phenomenological component develops.

d. Computers and current AI evaluate without feeling or motivation.

The question: can an AI have feelings?

1. If emotions are not primarily based upon evaluations and evaluations do not cause consciousness,

2. Then evaluations of any complexity can exist without feelings,

3. And there is no AI consciousness.

OR

1. If emotions are based upon evaluations and evaluations of some as yet unknown type are the cause of consciousness,

2. Then evaluations of some complexity will cause feelings and motivations,

3. And given enough variations, there will be at least one AI consciousness.

LASTLY

1. If emotions are based upon evaluations, but evaluations which produce motivations and feelings require a brain with strange hierarchies caused by the differences in how the parts of the brain react,

2. Then those strange hierarchies are imitable, but this needs to be puzzled out more...

FUTURE INVESTIGATION

For future investigation I want to chart out the differences among types of brains and the differences between types of ANNs. One thing about ML that I find brutal is that we are constantly describing the new thing in terms of the old thing. It’s very difficult to tell where our descriptors are misleading us.