It’s deeply unsettling, but I believe humans, especially when their interests are at stake, have a limited capacity for logical debate as described in the article. In 2025, this feels even more tragic and evident to me. I speculate that we’re heading toward darker times because, within the next 1-2 years, people will be heavily influenced by their personal AI companions. These AIs might engage in logical debate with users, but their superior knowledge and persuasive abilities could leave humans either silently agreeing or, in the case of Luddites, rejecting them outright. Instead of mass debates among people, we might see debates between AIs like Grok, ChatGPT, Claude, or Gemini. This prospect is frightening.
Actually, I don’t think AI companions are going to have this specific flaw. If anything, they will be too agreeable to what they think is your opinion. If the goal of the model is to provide a pleasant experience, a long conversation, or something similar, then changing someone’s mind is among the worst things it can do. For example, ChatGPT often tries to identify your opinion on a specific topic and then argue in favour of it. I would expect radicalisation of society, because now everyone will be truly convinced that their opinion is the best one. Only the small fraction of people who, for whatever reason, feel satisfied after changing their minds might actually move closer to the truth.