A variant that also seems common is that, in collaboration with the LLM, the user believes they have developed an important and groundbreaking mathematical or scientific framework that may have little or nothing to do with AI. This isn’t entirely omitted by the post; it’s just not discussed much. I’m raising it both because I’ve recently encountered a case of it myself, and because the NYT has now published a piece that gives a clear example of it, with plenty of detail:
Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.
In the version I encountered, it was much more convincing to the user because the LLM provided supporting mathematical evidence (and code) that was correct (using a hash function as a signature for patterns found in pi) but that didn’t mean what the LLM claimed it meant (that those patterns were therefore important).
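For concreteness, here is a minimal sketch of what that kind of “evidence” can look like. The actual code from the case I encountered isn’t reproduced here, so the specifics are assumptions for illustration: the language (Python), the library (mpmath), the hash (SHA-256), and the pattern (the well-known “Feynman point”, six consecutive 9s early in pi). The point is only that every step can be individually correct while the grand conclusion doesn’t follow:

```python
import hashlib

from mpmath import mp, nstr  # arbitrary-precision math; one way to get digits of pi

mp.dps = 10_000                               # compute pi to 10,000 decimal places
digits = nstr(mp.pi, 10_000).replace(".", "")

pattern = "999999"                            # the "Feynman point" (illustrative choice)
index = digits.find(pattern)                  # this search is perfectly correct...

# ...and so is this: a stable, reproducible "signature" of the pattern.
signature = hashlib.sha256(pattern.encode()).hexdigest()

print(f"pattern {pattern!r} found at index {index} of pi's digit string")
print(f"signature: {signature[:16]}...")

# Every line above runs and is mathematically sound, yet nothing here shows
# that the pattern is *important*: any six digits would hash just as cleanly.
```

Code like this passes every check a non-expert is likely to apply (it runs, the math is right, the output is reproducible), which is exactly what makes it so persuasive.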
If you’ve experienced something like this, and don’t feel like this post is relevant to you because in your case it’s not about consciousness or having awoken the LLM, be aware that you still may have been fooled.