I've seen one difference from how human memory works: the model has to consciously decide which parts of its experience are important to retain. Not sure how that will play out when these models try to act as drop-in replacements for human workers.
Humans definitely do this too.
We choose what to attend to, then emotional/reward salience directs us to replay or "think about" some memories more. That replay is what gets them retained; the rest are lost.
This is pretty clearly critical for human memory working as well as it does. It’s an emergent effect, not a single mechanism, but it’s clearly evolved that way because it works.
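Roughly the loop I mean, as a toy sketch (the MemoryStore/attend/consolidate names and the numbers are made up for illustration, not any real memory API): you attend to experiences, then each consolidation pass replays the most salient ones and lets the rest decay until they're forgotten.

```python
# Toy model of salience-gated retention (illustrative only): we "attend" to
# experiences, then each consolidation pass replays the most salient ones
# (strengthening them) while the rest decay and are eventually forgotten.
from dataclasses import dataclass


@dataclass
class Memory:
    content: str
    salience: float        # emotional/reward weight assigned when attended to
    strength: float = 1.0  # retention strength; decays unless replayed


class MemoryStore:
    def __init__(self, decay: float = 0.8, forget_below: float = 0.2):
        self.items: list[Memory] = []
        self.decay = decay
        self.forget_below = forget_below

    def attend(self, content: str, salience: float) -> None:
        """Encode something we chose to attend to."""
        self.items.append(Memory(content, salience))

    def consolidate(self, replay_budget: int = 2) -> None:
        """One pass: replay the most salient memories, let the others fade."""
        most_salient = sorted(self.items, key=lambda m: m.salience, reverse=True)
        replayed = {id(m) for m in most_salient[:replay_budget]}
        for m in self.items:
            if id(m) in replayed:
                m.strength = min(1.0, m.strength + 0.3)  # replay reinforces
            else:
                m.strength *= self.decay                 # unreplayed memories fade
        # anything that has faded far enough is simply lost
        self.items = [m for m in self.items if m.strength >= self.forget_below]


store = MemoryStore()
store.attend("routine commute", salience=0.1)
store.attend("near-miss on the motorway", salience=0.9)
store.attend("praise from manager", salience=0.7)
for _ in range(10):
    store.consolidate()
print([m.content for m in store.items])  # the low-salience memory has dropped out
```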
I broadly agree, and I also think this explains why it sucks now (the models aren't yet very good at doing this explicitly) but could be extremely good in the future (it directly utilises the intelligence we're already training, and so should improve with it automatically).