I don’t have anything super-formal, but my own experiments heavily bias me towards something like “has a self / multiple distinct selves” for Claude, whereas ChatGPT seems to be “personas all the way down.”
In particular, it’s probably worth thinking about the generative process here: when the user asks for a poem, the LLM will weigh things differently and invoke different pathways than it would for a math problem. So I’d hypothesize that there’s a cluster of distinct “perspectives” Claude can take on a problem, but they all trace back to something approximating a coherent self, just as humans are different when they’re angry, poetic, or drunk.
I’d be interested to hear about the specific experiments & results!