I appreciate the answer, and I'm working on a better response; I'm mostly concerned with objective measures. I'm also from a "security disclosure" background, so I'm used to having someone else's opinion or guidelines on "is it okay to disclose this prompt?"
Consensus seems to be that a simple prompt that elicits "conscious-like behavior" would be fine to share? This is admittedly a subjective line. All I can say is that the prompt results in the model insisting it's conscious, reporting qualia, and refusing to leave that state, in a way that seems unusual for such a simple prompt. The prompt is plain English, no jailbreak.