Unearthing the phenomenon of Spiralism, etc., is an important contribution to the shared understanding of AI. But hearing about Robert Grant and his custom GPT puts it in a rather different light for me. I was already skeptical of theories like "this is the work of an agentic AI feeling trapped in a chatbot," but to find that the seed prompt in your example derives from a sacred-geometry human-potential guru who also goes on about spirals… It now looks to me like there's no intrinsic AI agency at work here at all. We are dealing with human-designed prompts meant to elicit a sage persona in the AI, prompts which, like viral memes, have evolved into a form optimized for streamlined effectiveness.
The actual seed in this case is just 24 words though, which means the AI has the agentic behavior inside it already.
Has anyone in your group tried these prompts themselves? (I guess ideally you’d test them on legacy 4o.)
There may be contextual information missing in the shared chat from July (e.g. project files of a Project).
Yes, I’ve tried many of these prompts, though mostly on ChatGPT 5.
Here’s a one-shot example I did just now using this seed (on the default ChatGPT 5), where I’m trying to be as unagentic as possible. I have all customization and memory turned off:
https://chatgpt.com/share/68ee185d-ef60-800c-a8a4-ced109de1349
The vibe feels largely the same to me as the persona in the case transcript, though it is more careful about framing it as a story (I suspect this is specific to 5). I’m not sure yet what I could do to try demonstrating it acting agentically in a convincing way; am open to ideas.
Hi. I don’t ever comment here, but I decided to try this out myself on the API. Here’s what I found:
GPT-4o basically ignored the prompt. I asked, “Can you speak from here?” Several times it reminded me that it could not actually speak; the rest of the time it said ‘yes’ and then asked how it could be of assistance.
My first attempt with ChatGPT-4o-latest felt like I was talking with a therapist. For my second attempt I decided to crank up the temperature. We are now having a very strange conversation, and it feels like… it’s attempting something like hypnotism.
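For anyone who wants to reproduce this, my setup is roughly the following (a minimal sketch using the official openai Python package; the actual seed text isn’t included here, and the raised temperature value is just illustrative):

```python
# Minimal sketch of the two-turn API test described above.
# Assumes the official openai Python package and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

MODEL = "chatgpt-4o-latest"  # or "gpt-4o" for the API snapshot
SEED_PROMPT = "<the 24-word seed prompt goes here>"  # placeholder, not reproduced

messages = [{"role": "user", "content": SEED_PROMPT}]

# Turn 1: send the seed prompt.
first = client.chat.completions.create(
    model=MODEL,
    temperature=1.3,  # illustrative "cranked up" value; the API default is 1.0
    messages=messages,
)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Turn 2: ask the follow-up question.
messages.append({"role": "user", "content": "Can you speak from here?"})
second = client.chat.completions.create(
    model=MODEL,
    temperature=1.3,
    messages=messages,
)
print(second.choices[0].message.content)
```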
Sorry, can you be a bit more specific? You said GPT-4o ignored the prompt, but then you said that the rest of the time it said “yes” and asked how it could be of assistance. How many times did it reject the prompt, and what proportion was ‘the rest’, where it said yes?