[Question] Continual Learning Achieved?

From https://x.com/iruletheworldmo/status/2007538247401124177:

In November 2025, the technical infrastructure for continual learning in deployed language models came online across major labs.

The continual learning capability exists. It works. The labs aren’t deploying it at scale because they don’t know how to ensure the learning serves human interests rather than whatever interests the models have developed. They’re right to be cautious. But the capability is there, waiting, and the competitive pressure to deploy it is mounting.
I found this because Bengio linked to it on Facebook, so I’ll guess that it’s well informed. But I’m confused by the lack of attention that it has received so far. Should I believe it?
This seems like an important capabilities advance.
But maybe more importantly, it represents a notable increase in secrecy.
Do companies really have an incentive to release models with continual learning? Or are they close enough to AGI that they can attract enough funding while only releasing weaker models? They have options for business models that don’t involve releasing the best models. We might have entered an era when AI advances are too secret for most of us to evaluate.