Definitely possible; I'm trying to replicate these myself. Current vibe is that AI mostly gives aligned / boring answers.
So we assume that the prompts contained most of the semantics for those other pieces, right? I saw a striking one without the prompt included and figured it was probably prompted in that direction.
There are 2 plausible hypotheses:
1. By default the model gives 'boring' responses and people share the cherry-picked cases where the model says something 'weird'
2. People nudge the model to be 'weird' and then don't share the full prompting setup, which is indeed annoying
Given the realities of social media, I'd guess it's mostly 2, plus some outright deceptive omission of the prompting that steered the model in that direction.