So, according to you, a system that could acquire new facts, record them, access them, and use them continuously in this way would not constitute ‘real’ continual learning. It could conceivably fill its database with the actionable knowledge of 1,000 as-yet-unwritten textbooks, but that wouldn’t be ‘real’ to you.
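To make concrete what I mean by acquire / record / access / use, here’s a rough sketch of that loop. It’s purely illustrative; the store and function names are mine, not any particular system’s, and the retrieval step is a crude stand-in for whatever real mechanism (embeddings, search, etc.) you’d actually use:

```python
# A minimal sketch of the loop described above, not any particular system:
# a fixed model paired with an external store it can write new facts into
# and read back out of at answer time. All names here are illustrative.

class KnowledgeStore:
    """Append-only store of facts acquired after deployment."""

    def __init__(self) -> None:
        self._facts: list[str] = []

    def record(self, fact: str) -> None:
        """Acquire and record a new fact."""
        self._facts.append(fact)

    def retrieve(self, query: str, limit: int = 3) -> list[str]:
        """Access stored facts; naive keyword overlap stands in for real retrieval."""
        terms = set(query.lower().split())
        ranked = sorted(
            self._facts,
            key=lambda fact: len(terms & set(fact.lower().split())),
            reverse=True,
        )
        return ranked[:limit]


def answer(query: str, store: KnowledgeStore) -> str:
    """Use retrieved facts as context; the underlying model's weights never change."""
    context = store.retrieve(query)
    return f"Answer to {query!r}, grounded in: {context}"


store = KnowledgeStore()
store.record("example fact learned after training")   # acquire + record
print(answer("example query about the fact", store))  # access + use
```

The point of the sketch is only that nothing in this loop is fixed at training time: the store keeps growing, and the system keeps using what it has stored.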
Even “wholly new ways of conceptualizing and navigating the world, not just keeping track of what’s going on” are learnable and storable in the way I describe.
How is this type of learning not open-ended? What is limiting it?
Your third criterion seems to be related to unsupervised learning, specifically self-play. I’m not sure why you’d limit continual learning in this way, either.
You seem to be putting somewhat arbitrary constraints on what constitutes continual learning. Generally, if the system’s knowledge base is fixed, it’s incapable of continuing to learn. If it has the capacity to acquire new knowledge and skills, by whatever means, it continues to learn. You’re narrowing that general idea without really justifying why.
Sorry, I’m afraid I don’t understand what your analogy is supposed to map to. What is Grog in the context of our conversation? You seem to admit at the end that LLMs are not really at all like Grog, in that Grog has no underlying bedrock of understanding, while modern LLMs do.
I’ll agree with this definition, if you’ll agree that knowledge can exist in written form and that textbooks often embody exactly what you describe. They are very rarely ‘lists of facts’. More often than not, they are logically curated, organized explanations of phenomena and events, along with rich descriptions of their connections and interactions. You seem to be preferentially upselling knowledge that is stored in synaptic weights while drastically downplaying knowledge recorded in other media. Why?