The problem is that there are many definitions of AGI in circulation. Different people use the term in different ways. Because you’re speaking to many people when you write online, AGI doesn’t mean one thing. It doesn’t even mean a strictly defined thing for most of the people in the conversation, because they’ve absorbed much of the meaning from context. That’s how brains work.
If we all agreed on a definition like that, then we wouldn’t have this problem and it would be crisp.
Except in practice you’d find that there were a few things it couldn’t do yet, but those things don’t seem very important, so it’s very tempting to say “well it meets that definition for most purposes, so we should think of it as mostly AGI”.
In a complex space, even a crisp definition will become complex and therefore vague.
This is known as the descriptivist view of language, I believe. And I think it’s simply correct. Words are used in complex ways that differ between people. Using them “correctly” means using them in ways your intended audience will understand. Unfortunately it’s not possible to do this perfectly. I think this is just how brains and the world work.
I think, at a certain point, a phrase is self-explanatory enough that you can write off a certain share of definitions as just being wrong. AGI exists as a term in contrast to Narrow AI, which means “AI that can do some things as well as a human, but not others”. For either term to have any semantic significance at all, AGI can’t have exceptions.
Using your example, a system that was very useful for doing three important things would be “a good narrow AI system”, or just “a useful AI tool”. No additional information is conveyed by calling it “AGI”.