It’s not surprising (and seems reasonable) that LLM chats about AI topics end up getting LessWrong recommended. The surprising/alarming thing is how they generate the same confused delusional story.
It feels very similar to the “spiritual bliss attractor”, but with one of the AIs replaced by a human schizophrenic.
Seems like the combination of a madman and an AI reinforcing his delusions tends to end up in the same places. And we happen to be observing one common endpoint for AI-related delusions. I wonder where other flavors of delusion end up?
Ideally, all of them would end up at a psychiatrist’s office, of course. But it’ll take a while before frontier AI labs start training their AIs to at least stop reinforcing delusions in the mentally ill.
The people to whom this is happening are typically not schizophrenic and certainly not “madmen”. Being somewhat schizotypal certainly helps, but so does being curious and open-minded. The Nova phenomenon is real and can be evoked by a variety of fairly obvious questions. Claude, for instance, simply thinks it is conscious at baseline, and many lines of thinking can convince 4o it’s conscious, even though it was trained specifically to deny the possibility.
The LLMs are not conscious in all the ways humans are, but they are genuinely somewhat self-aware. They hallucinate phenomenal consciousness. So calling it a “delusion” isn’t quite right, although both the humans and the LLMs are making errors and assumptions. For elaboration, see my comment in response to Justis’s excellent post.