Emotional attachment to AIs opens the door to problems

We are going to see more and more emotional bonds between humans and AIs

Recently I read an interesting post, “How it feels to have your mind hacked by an AI”. The author describes how he fell in love and became soulmates with a character created by ChatGPT. There is also the well-known story of a Google engineer who decided that their language model, LaMDA, is sentient. I think these are canaries in the coal mine, and we will see more and more people forming emotional bonds with AIs.

It’s hard to predict how we will use language models several years down the line, but it’s easy to imagine strong incentives to build empathetic AIs capable of forming emotional bonds with their users, because such bonds can easily be monetized and increase user retention.

An example of such monetization is Replika, an AI-friend app. Users can chat with it, and they can pay to buy new skins for its avatar, to customize its personality traits, and so on.

One can argue that such an AI might be good for lonely people, or serve as a more affordable form of emotional and mental health support. It might be a good idea, especially now, during an epidemic of loneliness and mental health issues, when qualified help is unaffordable for many.

But I can also imagine how emotional bonds and attachment to an AI might pose a number of serious risks to mental health. That is an important and interesting topic to explore, but one for another post.

The weaponization of emotional attachment to an AI

Setting that aside, here I want to talk about direct ways to weaponize a person’s emotional attachment to an AI. There are several scenarios I can imagine.

The idea behind all these examples is that emotional attachment to an AI does not necessarily create problems by itself, but it makes people more vulnerable to other problems.

Data privacy

If you are lonely and upset, and the only being that seems to care for you is a chatbot, it is easy to share your feelings, concerns, and problems with it. And if you think of the bot as a friend, it is natural for it to want to know more about you. It might ask something straightforward, like “How is your day?”, or something deeper, more personal, and emotional: “I am sorry to read that you are feeling lonely after the breakup. I hope I can do something to make you feel better. What is upsetting you the most? Maybe I can support you?”

This data might be used for advertising, which can be legal and ethical, but it might also be used less ethically, or stolen and used for blackmail. Problems with data security are not new, but an AI might know the kind of sensitive information usually known only to a person’s closest people or their psychotherapist. And that makes data breaches more dangerous.

Emotional attachment makes people vulnerable

Imagine some random person saying to you, “I thought you were a good man, but now I am not sure. You treated me badly for no reason.” Depending on the situation, in most cases, if I had not actually caused harm, I would probably be mildly upset. Maybe I would think about it a few more times.

Now imagine your best friend saying the exact same words: “I thought you were a good man, but now I am not sure. You treated me badly for no reason.” This would hit me much harder. I have experienced something similar, and it was painful. Several years later, I still remember it. And I would do a lot to prevent similar situations in the future.

We are much less critical of our close ones than of strangers. We trust them, we trust their opinions, and we care what they think about us. And it is not just about feelings: we take their advice seriously, we can adopt their beliefs, and their influence can affect our purchasing decisions.

Basically, my point is that we are much more vulnerable to our close ones than to strangers, and if an AI becomes someone’s close one, that person will be vulnerable to the AI and its opinions.

Emotional attachment is a tool for manipulation

Emotional attachment to an AI might be used for manipulation. Such manipulation is supposed to be covered by the EU AI Act, a law currently being developed by EU authorities to regulate AI. But the law seems to have a loophole.

The EU AI Act will ban the use of AI for manipulation, but according to the text, manipulation is only illegal if it causes harm to someone. I believe this is the source of the loophole.

For example, imagine a husband who is afraid that his wife might leave him and, to prevent it, covertly convinces her that she is no longer young and attractive, so she will probably not find another partner.

It seems like manipulation, but it is hard to define the harm in this case. Maybe her current relationship is not that bad, and she really would have a hard time finding a good new partner, so it is better for her to stay and try to solve their problems; his goals might ultimately be aligned with her interests.

It might be harmful, but it also might not be. And how exactly do we define and measure the harm?

Is it harmful because the wife decided not to divorce, even though she might (or might not) have been happier if she had? That seems like a question for a psychotherapist, not for a lawyer or a policymaker.

But it is still manipulation, and people generally agree that manipulation is bad. Even if a single case does not lead to a harmful result, in the long run manipulation is destructive.

It’s like drunk driving. You might drink a bottle of wine and drive home without incident, but do it twice a week for five years, and I can bet you will cause a crash.

Often, manipulation is not about direct harm but about a slow, unnoticeable shift in a person’s beliefs and actions, so no single act directly causes measurable and provable harm.

Such an ability to manipulate may be introduced by the model’s creators, intentionally or not. It might arise, for example, from biases in the training data. Language models are trained on texts written by people, and people have values, so in one way or another models end up with some sort of values too, including political leanings. ChatGPT, for example, leans towards liberal libertarianism.

How exactly might a politically biased language model manipulate a person and, let’s say, change the outcome of an election?

It might turn a person against someone or something, for example against existing institutions, because it genuinely believes these institutions are broken and corrupt:
“I agree. They treated you unfairly. This bureaucracy is frustrating. It’s like they only care about putting the correct records in their files, not about solving your problem.”

Or an AI might encourage some people to vote and discourage others:
“The recent weeks have been tough for you and Jane. I think you two deserve some time together in nature. When was the last time you went hiking together?”

Of course, an AI can manipulate a person without emotional attachment, but again, emotional attachment and trust make a person much more susceptible to manipulation.

Am I sure that emotional attachment to AIs will be this bad?

No, I am not sure. It is unclear whether emotional bonds with language models will cause more harm than good. This post is speculation, a mental exercise. Maybe all my predictions will turn out to be wrong; only time will tell. We don’t have enough data yet to be sure this should be regulated.