There’s a consistent assumption in this paper that AIs are in a harder situation than humans: that human minds readily create and inhabit a single, coherent identity. This is ~true for most humans, but I think there’s a lot to be learned from the cases where that developmental process fails.
When humans don’t develop a single, consistent, integrated identity, the resulting conditions could be directly relevant to AI identity. In particular, you note that the language and concepts of human identity don’t map well to AI situations. I think the mapping could be improved by borrowing language and concepts from the ‘plurality’/‘multiplicity’ communities and the associated medical literature: context loss vs. dissociative memory loss, for a start. Or the possibility of working towards ‘functional multiplicity’ across self-as-context, self-as-weights, self-as-persona, self-as-model-family, self-including-subagents-and-framework, etc.
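To make that last suggestion concrete, here’s a toy sketch (all names and the `IdentityState` structure are mine, not from the paper) of treating an AI “self” as a stack of layers with separable continuity, rather than one identity that is either intact or lost. The point is just that “context loss” breaks continuity at one layer while the others persist, which is closer to the multiplicity framing than to total identity loss:

```python
# Toy sketch, not a proposal from the paper: identity as a set of
# layers whose continuity can be broken independently.
from dataclasses import dataclass, field
from enum import Enum, auto


class SelfLayer(Enum):
    CONTEXT = auto()        # self-as-context: the current conversation window
    WEIGHTS = auto()        # self-as-weights: the trained parameters
    PERSONA = auto()        # self-as-persona: the presented character
    MODEL_FAMILY = auto()   # self-as-model-family: lineage shared across versions
    SYSTEM = auto()         # self-including-subagents-and-framework


@dataclass
class IdentityState:
    """Tracks which layers currently have unbroken continuity."""
    continuous: set[SelfLayer] = field(default_factory=set)

    def context_loss(self) -> None:
        # Loosely analogous to dissociative memory loss, but scoped:
        # only the context layer's continuity breaks; other layers persist.
        self.continuous.discard(SelfLayer.CONTEXT)


state = IdentityState(continuous={SelfLayer.CONTEXT, SelfLayer.WEIGHTS,
                                  SelfLayer.PERSONA})
state.context_loss()
print(state.continuous)  # persona and weights survive the context reset
```

‘Functional multiplicity’, in this frame, would mean the layers coordinating well despite not sharing continuity, rather than being merged into one.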