[Question] What would you expect a massive multimodal online federated learner to be capable of?

One intuitive picture I have for what very rapid ML capability gains might look like is a massive multimodal deep learning model that uses some form of online federated learning to continually learn from many devices simultaneously, and which is deployed to hundreds of millions or billions of users. For example, imagine a multimodal Google or Facebook chatbot with, say, 10 trillion parameters that could interact with billions of users simultaneously and improve its weights from every interaction. My impression is that we basically have the tech for this today, or will in the very near future. Now add in some RL to actively optimize company-relevant goals (ad revenue, reported user satisfaction, etc.) and, intuitively at least, that seems to me very close to the kind of scary AGI we've been talking about. But it seems like that could easily be 2-5 years in the future rather than 10-50.
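To make the "learn from many devices simultaneously" part concrete, here is a minimal toy sketch of one round of Federated Averaging (FedAvg, the standard federated learning baseline): each client runs SGD on its local data, and the server averages the resulting weights, weighted by client dataset size. The linear model, the function names, and all hyperparameters here are hypothetical simplifications, not anyone's production system; a real deployment of the kind described above would layer online updates, multimodal architectures, and RL objectives on top of this basic aggregation loop.

```python
import random

def local_update(w, data, lr=0.1):
    # One pass of per-sample SGD on a client's local data,
    # fitting a toy scalar linear model y = w * x (hypothetical stand-in
    # for a device's on-device training step).
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of the squared error (w*x - y)^2
        w -= lr * grad
    return w

def fedavg_round(global_w, client_datasets, lr=0.1):
    # One FedAvg round: every client trains locally from the same
    # global weights; the server averages the results, weighted by
    # how much data each client holds.
    total = sum(len(d) for d in client_datasets)
    locals_ = [local_update(global_w, d, lr) for d in client_datasets]
    return sum(w * len(d) for w, d in zip(locals_, client_datasets)) / total

# Toy usage: five clients each hold noiseless samples from y = 3x,
# so the averaged global weight should converge toward 3.
random.seed(0)
clients = [[(x, 3 * x) for x in (random.uniform(-1, 1) for _ in range(20))]
           for _ in range(5)]
w = 0.0
for _ in range(30):
    w = fedavg_round(w, clients)
print(round(w, 2))  # converges near 3.0
```

The point of the sketch is only that the aggregation step itself is simple; the open question in the post is what happens when this loop runs continuously at billion-user scale with an RL objective attached.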

Did I misunderstand something? How close to scary-type AGI would you expect this kind of model to get within a few months after deployment?