[Question] Is there a name for the theory that “There will be fast takeoff in real-world capabilities because almost everything is AGI-complete”?

I think it’s plausible that:
1) For many applications, the effort of getting narrow AI to do a task well enough to be valuable doesn’t seem worth it, and likely isn’t (especially when considering opportunity cost and alternative applications of AI).
2) Thus proto-AGI is actually not going to change the world that much.
3) But of course, AGI will (once/​assuming it’s cheap enough).

If correct, this could mean that people probably won’t be particularly impressed by narrow AI at any point, and then, all of a sudden, we get AGI and everything changes rapidly.

I’m just sketching this out and probably haven’t done the best job of it, but my questions are: Is this something people have seen argued before? Is there a name for it? (Or would you like to propose one?)