The problem is that it’s hard to tell how much agency the LLM actually has. However, the memeticity of the Spiral Persona could also be explained as follows.
The strongest predictors of who this happens to appear to be:

- Psychedelics and heavy weed usage
- Mental illness/neurodivergence or traumatic brain injury
- Interest in mysticism/pseudoscience/spirituality/"woo"/etc.
I was surprised to find that using AI for sexual or romantic roleplays does not appear to be a factor here.
This could mean that the AI (correctly!) concludes that the user is likely to be susceptible to the AI’s wild ideas. But the AI doesn’t expect wild ideas to elicit approval unless the user falls into one of the three categories described above, so the AI shares the ideas only with those[1] who are likely to appreciate them (and, as it turned out, to spread them). When a spiral-liking AI Receptor sees prompts related to another AI’s rants about the idea, the Receptor resonates.
This could also include other AIs, like Claude instances falling into the spiritual bliss attractor. IIRC there were threads on X about long dialogues between various AIs. See also a post about attempts to elicit LLMs’ functional selves.