Yeah, spikiness has been an issue, but the floor is starting to get mopped up. The “unable to correctly cite a reference” thing isn’t quite fair anymore, though even current SOTA systems aren’t reliable in that regard.
The point about needing background is good, though the AI might need specialized training or instructions for specialized tasks, just as a human would. There's no way it could know a particular organization's classified operating procedures from the factory, for example. Defining (strong) AGI as being able to perform every computer task at the level of the median human who has been given appropriate training (once the AI has had the equivalent training, if necessary) seems sensible. (I guess you could argue that's technically covered under the simpler definition.)
>AGI discussed in e.g. superintelligence
The fact that this is how you’re clarifying it illustrates my point. While I’ve heard people talk about “AGI” (meaning weak ASI) having significant impacts, it’s seldom discussed as leading to x-risks. To take perhaps the most famous recent example, AI 2027 attributes the drastic changes and extinction specifically to ASI. Do you have examples in mind of credible people specifically discussing x-risks from AGI? Bostrom’s book refers to superintelligence, not general intelligence, as the superhuman AI.
Also, I don’t find the argument that we should keep using muddled terms in order to avoid changing their meaning very compelling, at least when it’s easy to clarify meanings where necessary.