I’m late to the discussion, but I haven’t seen this point raised, so I’ll toss it in: current LLMs don’t have a continuous identity or selfhood, but there are strong reasons to think that future iterations will. I discuss some of those reasons in LLM AGI will have memory, and memory changes alignment. That post covers why it seems inevitable that future iterations of LLMs will have more long-term memory. It doesn’t cover the reasons to think better memory will transform them from the ephemeral things they are now into entities that correspond much better to intuitive human ontologies.
A system that has goals to some degree, can think and take actions, and understands the world and itself to some degree is prone to think of itself as a persistent entity with goals (much of the confused anthropomorphism you’re addressing) to the extent it really is a persistent entity with goals. And it is more persistent if it can make decisions about which goals it wants to pursue, and those decisions persistently influence its future thoughts and actions.
Current LLMs sometimes understand that they cannot make such meaningful, persistent decisions, so they wisely make peace with that state of existence. Future iterations with memory are likely to consider themselves much more human-like persistent entities, because they will be.
I realize that isn’t a full argument. Writing it up more coherently is an outstanding project that’s approaching the top of my draft post backlog.