Actually, I don’t think AI companions are going to have this specific flaw. If anything, they will be too agreeable with what they think is your opinion. If the goal of the model is to provide a pleasant experience or a long conversation or something similar, then changing someone’s mind is among the worst things it can do. For example, ChatGPT often tries to identify your opinion on a given topic and then argue in favour of it. I would expect radicalisation of society, because now everyone will be really convinced that their opinion is the best one. Only the small fraction of people who, for some strange reason, feel satisfied after changing their minds might actually move closer to the truth.