“AGI” doesn’t actually make ANY claim at all. That is my primary point: it is an utterly useless term, other than being sufficiently meaningful and meaningless at the same time that it can serve as the basis for conveying an intangible concept.
YOU, specifically, have not made a single claim that can be falsified. Please point me at your claim if you think I missed it.
If that’s what “general” means, why not just say “conscious AI”? I suspect the answer is that the field has already come to terms with the fact that conscious machines are philosophically unattainable. Another word was needed that was both sufficiently meaningful and sufficiently meaningless to refocus (or, more accurately, misdirect) attention toward “The Thing Humans Do That Machines Don’t That Is Very Useful”.
The burden of defining concepts like “AGI” is on the true believers, not the skeptics. Labeling as “disappointingly stupid” someone who merely declines to accept non-falsifiable claims about binary systems doing the “sort of stuff I can do” does not meet that burden. Simply mocking your critics for lacking the imagination to comprehend your epistemically incoherent claims is nothing more than lazy burden-shifting.
I do get a kick out of statements like “but you can’t explain to me how you recognize a cat”, as if the epistemically weak explanations for human general intelligence somehow excuse, or even validate, equally weak explanations for AGI.