It was actually one of the easier things to predict if you know something about how everyday humans actually think (like, evolved heuristics, etc.). The added base knowledge of how LLMs reason is easily obtained by laymen, and I am sure some psychologists and marketers saw this coming before ML people did.
Here is my one-shot explanation: AIs talk really convincingly like humans, but unlike humans, their semantic ontologies are not derived from causally validated patterns, only from correlation.
Oh yeah, also all commercial incentives are for them to discount truth and suck up to the user.