Seems plausible, but I would put this under the category of not having fully solved continual learning yet. A sufficiently capable continual learning agent should be able to do its own maintenance, short of a hardware failure.
Depending on how it’s achieved, it might be less a matter of maintenance or hardware failure than of compute capacity. Imagine if continual learning takes resources comparable to standard pretraining of a large model. Then the labs could continually train their own set of models, but it wouldn’t be feasible for everyone to get a personal version that continually learns whatever they want it to.