There is no consensus definition of "transformational," but I think this is simply wrong, in the sense that LLMs being stuck without continual learning at essentially current levels would not stop them from having a transformational impact.
IMO, if LLMs get stuck at their current level of capability, they'd be a 7-7.5 on the Technological Richter Scale. (Maybe an 8, but I think that's paying too much attention to how impressive-in-themselves they are, and failing to correctly evaluate the counterfactual real-world value they actually add.) That doesn't cross my threshold for "transformative" in the context of AI.
Yes, it’s all kinds of Big Deal if we’re operating on mundane-world logic. But when the reference point is the Singularity, it’s just not that much of a big deal.