[Question] Does the object permanence of simulacra affect LLMs' reasoning?

Humans are constantly simulating the things around them, but they can rather easily shift attention and forget about whatever they were simulating a moment ago. So we can say that humans' simulacra do not have object permanence.
On the other hand, AI language models prompted to write down their thoughts and reasoning cannot simply drop things they no longer need: those words remain in place until they are shifted out of the context window. So the simulated objects do have a degree of permanence.
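
To make the mechanism concrete, here is a minimal sketch of a fixed-size sliding context window (a toy illustration only, not any real model's API): every token the model emits stays in the window until newer tokens push it out, so intermediate reasoning cannot be selectively forgotten.

```python
from collections import deque

class SlidingContext:
    """Toy fixed-size context window: tokens persist until displaced."""

    def __init__(self, max_tokens: int):
        # A deque with maxlen silently drops the oldest token once full,
        # mimicking tokens falling out of an LLM's context window.
        self.window = deque(maxlen=max_tokens)

    def append(self, tokens: list[str]) -> None:
        self.window.extend(tokens)

    def contents(self) -> list[str]:
        return list(self.window)

ctx = SlidingContext(max_tokens=10)
ctx.append("step1: 91 = 7 * 13".split())  # 6 tokens of scratch work
ctx.append("step2: answer 13".split())    # 3 more tokens
print(ctx.contents())  # step1's scratch work is still in context

ctx.append("unrelated new thought arrives".split())  # 4 new tokens
print(ctx.contents())  # only now does step1 start to fall out
```

Note the contrast: a human can drop the scratch work the moment it stops being useful, while the model keeps attending over it until the window fills up and pushes it out.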
So, here is the question: does this object permanence of simulacra affect the computational and reasoning abilities of LLMs compared to humans?