What emotions would AIs need to feel?

[this post is completely fluffy and superficial]

When talking about AIs, the issue of emotions often comes up. I normally finesse the question by saying that we don’t really know whether AIs would have things we would recognise as emotions, and would you stop pestering me, I have some serious and scary points to make.

But… If we take the question seriously for a second, and allow ourselves to speculate… The main angle of attack is to wonder: what are emotions for, what functions do they serve? And would they serve the same functions in an AI?

So, here’s a speculative cross-section of emotions, and whether AIs are likely to have versions of them:

  • Very likely: Joy, Sadness, Boredom, Empathy, Interest

At their simplest, joy and sadness are responses to the situation turning out better or worse than expected. These are essential to learning, and the AI would certainly have some version of them (a toy sketch follows below). Similarly, boredom is a sign that the AI should explore its environment more, empathy is useful for understanding other agents, and interest is a sign that a certain area should be investigated as a priority.
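To make “better or worse than expected” concrete, here’s a minimal sketch in Python (all names and numbers are invented for illustration, not any real system): a positive prediction error plays the functional role of joy, a negative one of sadness, and a long run of near-zero errors could double as a boredom signal prompting more exploration.

```python
# Minimal sketch: prediction error as a functional "joy/sadness" signal.
# All names and numbers are illustrative, not a real system.

learning_rate = 0.1

def update(expected_value, observed_reward):
    """Return the prediction error and the updated expectation."""
    error = observed_reward - expected_value      # > 0 ~ "joy", < 0 ~ "sadness"
    expected_value += learning_rate * error       # learn from the signal
    return error, expected_value

expected_value = 0.0
for reward in [1.0, 1.0, 1.0, 0.0]:
    error, expected_value = update(expected_value, reward)
    print(f"reward={reward:.1f}  error={error:+.2f}  expectation={expected_value:.2f}")
```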

  • Possible, if the AI is limited or non-transparent: Anger, Kindness, Cruelty, Trust, Distrust, Surprise

If the AI is able to credibly pre-commit to certain actions (eg if attacked, it will retaliate), then some of these emotions won’t be needed. But it might not be able to credibly signal that, in which case anger would be useful: other agents will be less inclined to exploit it, lest it react aggressively (note that giving in to anger is generally a disadvantage, but being known to give in to anger is often an advantage – see the toy calculation below). Kindness and cruelty are similarly tools for managing social interactions; if the AI is transparent and can pre-commit, they won’t be needed.
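To see why a reputation for anger can substitute for pre-commitment, here’s a toy expected-payoff calculation (Python; the payoffs and probabilities are invented for illustration): from an attacker’s point of view, a target known to retaliate – whether by binding pre-commitment or by a known temper – is simply not worth exploiting.

```python
# Toy sketch: why being *known* to retaliate can deter exploitation.
# Payoffs and probabilities are invented for illustration.

EXPLOIT_GAIN = 2.0       # attacker's gain from a successful exploit
RETALIATION_COST = 5.0   # attacker's loss if the victim retaliates

def attacker_expected_payoff(p_retaliate):
    """Attacker's expected payoff from exploiting a given target."""
    return EXPLOIT_GAIN - p_retaliate * RETALIATION_COST

for label, p in [("transparent pre-committer", 1.0),
                 ("known hothead", 0.8),
                 ("known pushover", 0.0)]:
    payoff = attacker_expected_payoff(p)
    verdict = "leave alone" if payoff < 0 else "exploit"
    print(f"{label}: expected payoff {payoff:+.1f} -> {verdict}")
```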

Empathy is the key AI emotion for modelling other agents, but the AI may also assign special categories – trust and distrust – to agents with especially positive or negative traits. Surprise, finally, is a signal that unlikely events are happening and that a reassessment is needed; if the AI cannot implement its assessments and reassessments in a smooth way, it may need some threshold of surprise to trigger a reaction, as in the sketch below.
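To illustrate the threshold idea: a standard way to quantify surprise is surprisal, -log p(observation), under the AI’s current model. Here’s a toy sketch (Python; the model probabilities and cutoff are invented for illustration):

```python
import math

# Toy sketch: "surprise" as surprisal under the AI's current model,
# with a threshold that triggers a (costly) reassessment.
# Model probabilities and threshold are invented for illustration.

model = {"sunny": 0.7, "rainy": 0.29, "snow_in_july": 0.01}
SURPRISE_THRESHOLD = 3.0  # nats; tune to taste

def surprisal(observation):
    return -math.log(model[observation])

for obs in ["sunny", "rainy", "snow_in_july"]:
    s = surprisal(obs)
    if s > SURPRISE_THRESHOLD:
        print(f"{obs}: surprisal {s:.2f} -> reassess the model!")
    else:
        print(f"{obs}: surprisal {s:.2f} -> carry on")
```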

  • Possible, if the AI has a physical body: Fear, Disgust, Anxiety, Feelings of Security

These emotions are all connected with protecting one’s own physical body, so an AI would only feel them if it had a physical body itself and couldn’t just think through the issues dispassionately.

  • Unlikely for an AI to feel: Love, Shame, Pride, Envy, Indignation, Excitement, Pity

These are very specific to the human condition and to human social circumstances, so they are unlikely to be felt by generic AIs. It’s always possible for AIs to feel these, but there’s no necessary reason for them to, as far as we know now.

Finally, we should consider that AIs could have emotions that we never had, because humans were never in circumstances where we would use them. What do you think these might be like? I’ll present two speculative ideas. First, AIs should have an intuitive grasp of statistics and probability that is much better than our own, so they may have statistical instincts (one worked example below). Secondly, if the AI gets copied a lot, self-loyalty – loyalty to copies and slightly-divergent copies – may become useful, and instinctive ways of resolving debates with itself may become common.
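As a taste of what a statistical instinct might get right where human intuition stumbles, here’s the classic base-rate calculation (Python; the numbers are the standard textbook illustration, not data from anywhere): even a 99%-accurate test for a rare condition yields mostly false positives.

```python
# Classic base-rate example: P(condition | positive test).
# Numbers are the standard textbook illustration, not real data.

base_rate = 0.001       # 1 in 1000 people has the condition
sensitivity = 0.99      # P(positive | condition)
false_positive = 0.01   # P(positive | no condition)

p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
p_condition_given_positive = sensitivity * base_rate / p_positive

print(f"P(condition | positive) = {p_condition_given_positive:.3f}")
# ~0.090: an agent with a statistical instinct would feel this
# immediately, where human intuition tends to answer ~0.99.
```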