I’m glad you asked. I completely agree that nothing in the current LLM architecture prevents that technically and I expect that it will happen eventually.
The issue in the near term is practicality: training models is, and for some time will remain, very expensive. Inference is less expensive, but still costly enough that profit is only possible by serving the model statically (i.e., without changing its weights) to many clients, which amortizes the cost of training and inference.
These clients often rely heavily on models being static, because it makes their behavior predictable enough for a production environment. For example, if you use a model for a chatbot on your company’s website, you wouldn’t want its personality to change based on what people say to it. We’ve seen that go wrong very quickly with Microsoft’s Twitter bot Tay.
It’s also a question of whether you want your model to internalize new concepts (let’s just call it “continual learning”) based on everybody’s data or based on just your data. Using everybody’s data is more practical in the sense that you just update the one model that everybody uses (which, in a sense, is already happening when they move the cutoff date of the training data forward for the latest models), but it’s not something that users will necessarily be comfortable with. For example, users won’t want a model to leak their personal information to others. There are also legal barriers here, of course, especially with proprietary data.
People will probably be more comfortable with a model that updates just on their data, but that’s not practical (yet), in the sense that compute would need to be cheap enough to run an entire, slightly different model for each specific use case. It can already be done to some degree with parameter-efficient fine-tuning, but that doesn’t change the weights of the entire model (which would be prohibitively expensive with current technology), and I don’t think this form of fine-tuning can implement continual learning effectively (but I’m happy to be proven wrong here).
Training a LoRA has a negligible cost compared to pre-training a full model, because it only involves updating roughly 1.5% to 7% of the parameters (per https://ar5iv.labs.arxiv.org/html/2502.16894#A6.SS1), and training on thousands to millions of tokens instead of trillions.
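To make that concrete, here’s a minimal sketch (my own illustration, not taken from the paper above) of attaching a LoRA adapter with HuggingFace’s peft library; the model name, rank, and target modules are just example assumptions, and the exact trainable fraction depends on them:

```python
# Sketch: attach a LoRA adapter to a small causal LM and report how few
# parameters are actually trainable. Model name and hyperparameters are
# illustrative, not a recommendation.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

config = LoraConfig(
    r=16,                                 # rank of the low-rank adapter matrices
    lora_alpha=32,                        # scaling applied to the adapter output
    target_modules=["q_proj", "v_proj"],  # which weight matrices get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
# Prints trainable vs. total parameter counts; only the adapter weights
# (nothing in the base model) receive gradient updates during fine-tuning.
model.print_trainable_parameters()
```

Only the adapter weights get gradients, so the optimizer state and the training run stay small; the base model remains frozen and shared.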
Running inference with different LoRAs on the same base model in large batches is also very much possible with current technology (even if not without some challenges), and OpenAI offers their fine-tuned models for just 1.5–2x the cost of the original ones: https://docs.titanml.co/conceptual-guides/gpu_mem_mangement/batched_lora_inference
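For the serving side, here’s a rough sketch of what that looks like with vLLM’s multi-LoRA support (the adapter name and path are placeholders I made up, and this is an open-source stand-in for the idea, not how OpenAI does it internally):

```python
# Sketch: serve one frozen base model and route requests to per-client LoRA
# adapters within the same batch. Adapter name and path are placeholders.
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True, max_loras=4)
params = SamplingParams(max_tokens=64)

# One client's adapter; another client would get its own LoRARequest with a
# different id and path, while the base weights stay shared in GPU memory.
support_lora = LoRARequest("support-adapter", 1, "/path/to/support_lora")

outputs = llm.generate(
    ["Customer: my invoice looks wrong, can you help?"],
    params,
    lora_request=support_lora,
)
print(outputs[0].outputs[0].text)
```

The base weights are loaded once and the per-client state is just the adapter, which is what keeps the marginal cost of a fine-tuned model in that 1.5–2x range rather than the cost of hosting a whole separate model.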
You probably don’t need continual learning for a tech-support use case. I suspect you might need it for a task so long that the whole reasoning chain doesn’t fit into your model’s effective context length (which is shorter than the advertised one). On these tasks inference is going to be comparatively costly just because of the test-time scaling required, and users might be incentivized with discounts or limited free use if they agree that their dialogs will be used for improving the model.