My strong hunch is that this is true for almost any form of communication (internal or external) we receive: it conveys something we can extract value from, if we are able to look past the surface (the propositional content) of what we immediately infer.
And how difficult it is to remain open to the possibility that my first impression of a signal is “incorrect” (that I got it wrong on the first attempt), given how frequently I have relied on such first-impression inferences and am still alive (which speaks to the adaptive value of my past choice not to question them)...
The best I can offer is to make it a regular but not constant practice to spend, say, 15 to 30 minutes a day on some kind of “habitual thought journal,” asking myself if and when some of my automatic inferences might have been wrong, mostly just to play with that possibility, so that those kinds of mental avenues become more readily available in the moment when I need them. It’s important to raise the stakes during that practice, so the more I can make it resemble the real deal (for instance by role-playing situations with a conversational partner), the less “artificial” and the more “transferable” this learning becomes.
All in all, an excellent primer on the issue, with useful extensions!
I don’t comment a lot, but I felt this one was definitely worth the read and my time.
While I don’t necessarily agree with every aspect, much of this resonated with how I see social media has (been) warped from a regular market of social connection into a lemon market, where the connection is crappy and many sane people I know are either blinding themselves to it or leaving, which in some corners leaves behind a cesspool of the dopamine-hit addicted.
Ultimately, this also seems to be true of how people have responded to the latest wave of human-rights initiatives (DEI) carried into the workplace by HR departments, where a small number of bad actors have capitalized on the widespread naive assumption that “supporting the underdog is a good thing to do.”
The predictability of human behavior creates an attack surface for actors who can find ways to extract value from that fact, and this will certainly apply to how humans interact with AI. I found it interesting that on the same day this article hit my email inbox, Bruce Schneier’s Crypto-Gram (his monthly newsletter) also contained a reference to the OODA loop (observe, orient, decide, act) and to the adversarial attempt to “get into” one’s enemy’s loop in order to exert control and win.
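To make the OODA point concrete, here is a minimal toy sketch (my own illustration, not from the article or from Crypto-Gram): two agents track a changing environment, one re-observing every tick and one only every few ticks, and the slower one ends up acting on stale information. That staleness gap is exactly what an adversary who has gotten into your loop exploits. All names and numbers below are illustrative assumptions.

```python
# Toy model of OODA tempo: the agent with the shorter observe-act cycle
# acts on fresh information far more often than the slower agent.
import random

def run(ticks: int = 1000, fast_period: int = 1, slow_period: int = 4) -> None:
    random.seed(42)
    world = 0                      # the true state both agents try to track
    fast_view, slow_view = 0, 0    # each agent's most recent observation
    fast_hits = slow_hits = 0

    for t in range(ticks):
        world = random.randint(0, 3)     # environment shifts every tick
        if t % fast_period == 0:
            fast_view = world            # fast agent re-observes often
        if t % slow_period == 0:
            slow_view = world            # slow agent re-observes rarely
        # "Act": an action succeeds only if it matches the current state.
        fast_hits += (fast_view == world)
        slow_hits += (slow_view == world)

    print(f"fast agent acted on fresh info {fast_hits / ticks:.0%} of the time")
    print(f"slow agent acted on fresh info {slow_hits / ticks:.0%} of the time")

run()
```

Under these toy settings the fast agent is essentially always current, while the slow agent is right less than half the time; in adversarial terms, everything the slow agent does is predictable and exploitable by anyone cycling faster.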
Our naive trust, as a consumer base, in the moral neutrality of LLMs remains unchanged; it is only a matter of time until some actors find a near-perfect attack surface there and get far deeper into our decision-making than social media ever could…