I wrote about ChatGPT-induced sycophancy in my inaugural post on LessWrong.
It’s a huge problem, and even if you think you know AI or work with it daily, it can still affect you. As both you and @dr_s mentioned a while ago, there’s absolutely a religious component to LLM-induced sycophancy. I hinted at it in my inaugural post as well, though that was more about growing up Jewish and being primed for Pascal’s Mugging around ASI takeoff scenarios, since the framing is eerily similar to reward-and-punishment theology.
Still, one thing that isn’t often mentioned is the impact LLM sycophancy has on the “high-functioning autistic” population, many of whom suffer from chronic loneliness and are perfect candidates to be showered with endless praise by the LLM companion of their choosing. Believe me, it’s soothing, but at what cost?
I happen to agree with you that frontier labs creating an open, public repository of shared LLM conversations could be a stellar source of RLHF training data, and might even mitigate the worst symptoms of the psychosis we’re seeing, though I don’t know whether that would win over all the critics.
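To make that idea a bit more concrete, here’s a minimal Python sketch of how such a repository might feed RLHF. Everything in it is an assumption for illustration: the `Conversation` structure, the `build_preference_pairs` helper, and the community “sycophantic” flag are hypothetical, not any lab’s actual pipeline.

```python
# Hypothetical sketch: turning publicly shared LLM conversations into
# RLHF-style preference pairs. The "sycophantic" flag is assumed to be
# a community label attached to shared conversations; no real lab or
# repository works this way as far as I know.
from dataclasses import dataclass


@dataclass
class Conversation:
    prompt: str       # the user's message
    response: str     # the model's reply
    flags: set[str]   # community labels, e.g. {"sycophantic"}


def build_preference_pairs(conversations: list[Conversation]):
    """Pair flagged (sycophantic) replies with unflagged replies to the
    same prompt, yielding (prompt, chosen, rejected) triples that a
    reward model could be trained on."""
    # Group shared conversations by their prompt.
    by_prompt: dict[str, list[Conversation]] = {}
    for c in conversations:
        by_prompt.setdefault(c.prompt, []).append(c)

    pairs = []
    for prompt, convs in by_prompt.items():
        grounded = [c for c in convs if "sycophantic" not in c.flags]
        flattering = [c for c in convs if "sycophantic" in c.flags]
        # Each grounded reply is "chosen" over each flattering one.
        for good in grounded:
            for bad in flattering:
                pairs.append((prompt, good.response, bad.response))
    return pairs
```

The design choice here is simply that community flags become the “rejected” side of each preference pair; a real pipeline would obviously need far more careful labeling, deduplication, and privacy review.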
Time will tell, I guess?
Also, I know about the Garcia vs. CharacterAI lawsuit, which sadly involves an autistic teenager who died by suicide, but I was specifically referring to cases where the person is still alive yet uses AI models as companions, girlfriends, etc.