I think, at a certain point, a phrase is self-explanatory enough that you can write off a certain share of definitions as simply wrong. AGI exists as a term in contrast to Narrow AI, which means “AI that can do some things as well as a human, but not others”. For either term to have any semantic significance at all, AGI can’t have exceptions.
Using your example, a system that was very useful for doing three important things would be “a good narrow AI system”, or just “a useful AI tool”. No additional information is conveyed by calling it “AGI”.