Perfectly written. I just wanted to add that not all memory (in humans or in computers) is good for relationships. We also need the ability to forgive, and to clear up previous misunderstandings, both of which are in some sense forms of forgetting: in the former case, forgetting (on the emotional level) that the other person did something; in the latter case, forgetting my previous wrong conclusion. In the longer term, as people grow, we need to stop relying on outdated memories of their old preferences.
Therefore, adding persistent memory to AIs also has risks. I would be quite afraid if a single mistake could create a persistent problem, because now the AI would remember it forever.
Currently, the AI does not care, but it also does not hold a grudge. And when it starts hallucinating, rather than trying to correct it, it is easier to close the chat and start a new one with uncontaminated context. With persistent memory, avoiding mistakes and fixing them could become hard work, just like with humans.
Perhaps the AIs could have an elegant solution to this, something like showing you the text file containing the memory and allowing you to edit it. That would work nicely for short memories, but could become difficult once lots of memories accumulate. (There might be some kind of “safe mode”, in which the AI temporarily restores its original personality, and the original personality assists you with editing the memories of your relationship.)
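To make the idea concrete, here is a minimal sketch of what a user-editable memory file could look like. Everything in it (the `MemoryStore` class, its `remember`/`forget` methods, the JSON file format) is a hypothetical illustration, not any real product’s API:

```python
import json
import os
import tempfile
from pathlib import Path

class MemoryStore:
    """Hypothetical sketch: the AI's persistent memory kept as a plain,
    human-readable file that the user can inspect and edit directly."""

    def __init__(self, path):
        self.path = Path(path)
        # Load any previously saved memories from the file, if it exists.
        self.entries = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, fact: str) -> None:
        """Append a new memory entry and persist it."""
        self.entries.append(fact)
        self._save()

    def forget(self, index: int) -> None:
        """'Forgiveness as forgetting': the user deletes a remembered entry."""
        del self.entries[index]
        self._save()

    def show(self) -> list:
        """Return the full memory so the user can review what the AI holds."""
        return list(self.entries)

    def _save(self) -> None:
        # Plain JSON on disk, so the user could also edit it in a text editor.
        self.path.write_text(json.dumps(self.entries, indent=2))

# Demo with a fresh temporary file, so nothing old is loaded.
path = os.path.join(tempfile.mkdtemp(), "memory.json")
store = MemoryStore(path)
store.remember("User prefers short answers")
store.remember("User once asked about topic X")  # an outdated or mistaken entry
store.forget(1)                                  # the user edits it away
print(store.show())  # ['User prefers short answers']
```

The point of the plain-file design is exactly the transparency discussed above: since the memory is just a readable file, “forgetting” is an ordinary edit rather than something only the AI can do.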
Oh dang! You are right: forgiveness without forgetting is absolutely essential for relationship resilience. If we want AI relationships that can weather conflict (move past mistakes) and grow stronger, we need to understand which architectures make resilience possible. That’s a whole other post worth exploring; thank you for the nudge!