I don’t comment a lot, but I felt this one was definitely worth the read and my time.
While I don’t necessarily agree with every aspect, much of this resonated with how I see social media: it has been warped from an ordinary market for social connection into a lemon market, where the connection on offer is poor, and many sane people I know are simply shutting it out (leaving behind, in some corners, a cesspool of the dopamine-hit addicted).
Ultimately, the same dynamic seems to describe how people have responded to the latest wave of human-rights initiatives (DEI) carried into the workplace by HR departments, where a small number of bad actors have capitalized on the broadly naive assumption that “supporting the underdog is a good thing to do.”
The predictability of human behavior creates an attack surface for anyone who can find a way to extract value from it, and this will certainly apply to how humans interact with AI. I found it interesting that on the same day this article hit my inbox, Bruce Schneier’s Crypto-Gram (his monthly newsletter) also referenced the OODA loop and the adversarial attempt to “get inside” an enemy’s loop in order to exert control and win.
Our naive trust, as a consumer base, in the moral neutrality of LLMs remains unchanged; it is only a matter of time until some actors find in them a near-perfect attack surface, one that reaches far deeper into our decision-making than social media ever could…