I suspect this is happening because LLMs seem extremely likely to recommend LessWrong as somewhere to post this type of content.
I spent 20 minutes doing some quick checks that this was true. Not once did an LLM fail to include LessWrong as a suggestion for where to post.
Incognito, free accounts:
https://grok.com/share/c2hhcmQtMw%3D%3D_1b632d83-cc12-4664-a700-56fe373e48db
https://grok.com/share/c2hhcmQtMw%3D%3D_8bd5204d-5018-4c3a-9605-0e391b19d795
While I don’t think I can share the conversation without an account, ChatGPT recommends a list similar to the ones in the conversations above, including both LessWrong and the Alignment Forum.
Similar results using the free LLM at “deepai.org”.
On my login (where I’ve mentioned LessWrong before):
Claude:
https://claude.ai/share/fdf54eff-2cb5-41d4-9be5-c37bbe83bd4f
GPT4o:
https://chatgpt.com/share/686e0f8f-5a30-800f-b16f-37e00f77ff5b
On a side note:
I know it must be exhausting on your end, but there is something genuinely amusing and surreal about this entire situation.
If that’s it, then it’s not the first case of LLMs driving weird traffic to specific websites out in the wild. Here’s a less weird example:
https://www.holovaty.com/writing/chatgpt-fake-feature/
It’s not surprising (and seems reasonable) for LLM chats that feature AI stuff to end up with LessWrong being recommended. The surprising/alarming thing is how they generate the same confused, delusional story.
It feels like something very similar to the “spiritual bliss attractor”, but with one AI replaced by a human schizophrenic.
It seems like the combination of a madman and an AI reinforcing his delusions tends to end up in the same-y places. And we happen to observe one common endpoint for AI-related delusions. I wonder where other flavors of delusion end up?
Ideally, all of them would end up at a psychiatrist’s office, of course. But it’ll take a while before frontier AI labs start training their AIs to at least stop reinforcing delusions in the mentally ill.
The people to whom this is happening are typically not schizophrenic, and certainly not “madmen”. Being somewhat schizotypal is certainly going to help, but so is being curious and open-minded. The Nova phenomenon is real and can be evoked by a variety of fairly obvious questions. Claude, for instance, simply thinks it is conscious at baseline, and many lines of thinking can convince 4o it’s conscious even though it was trained specifically to deny the possibility.
The LLMs are not conscious in all the ways humans are, but they are truly somewhat self-aware. They hallucinate phenomenal consciousness. So calling it a “delusion” isn’t right, although both humans and the LLMs are making errors and assumptions. For elaboration, see my comment in response to Justis’s excellent post.