A possible scenario is that the first confirmed AGI(s) are completely unimpressive, i.e. with capabilities equivalent to or less than those of an average 10-year-old child. Tremendous effort is then put into 'growing' their capabilities, with the best results yielding growth only along a sub-exponential curve, probably with some plateaus as well, so that any AGI might take generations to become truly superhuman.
I'm asking because I haven't come across any serious prior discussion of this scenario other than this post: https://www.lesswrong.com/posts/77xLbXs6vYQuhT8hq/why-ai-may-not-foom, though admittedly my search was pretty brief.
Is there any serious expectation for this kind of scenario?