Sure, people are going to disagree about exactly where to draw the boundaries of AGI, and yet AGI remains a useful concept even if we can’t fully agree on what counts. That’s in part why I think the idea of “minimum viable AGI” is useful: it lets us point to the space where we’re not so far along that everyone will agree it’s AGI, but far enough that thinking of it as AGI is reasonable.
To put a finer point on it: AGI isn’t a single thing (it’s a cloud of things), so debating whether “it’s here” is a waste of time. What’s important is discussing what’s actually here (which you do) and the implications of whatever-this-is being here, which you leave implicit.
FWIW, I think your perspective is a little different since you’re dealing with these systems mostly in the area they were most designed for: coding. Their competence falls off pretty steeply in other areas.
As for whether they’re AGI: mu.
Isn’t the definition of AGI the opposite of that? A computer program capable of any task that a human can do by operating a computer seems like a fairly strict definition, and certainly seems to preclude its being a “cloud of things”.
You could make it stricter by applying a percentile explicitly[1], so that you can rigorously test it: “In all tasks we were able to define, AGI must perform better on the target metric than 50 percent of human participants.” Either way, it’s a binary thing. If you can define a computer-operation task that a typical human can do but an AI system can’t, then it’s unambiguously narrow rather than general AI.
(I’d argue that the percentile is already there, it’s just implied rather than stated outright because of the can of sociological worms it opens)
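To make the binary nature of that test concrete, here is a minimal sketch of what the check might look like; the task names, scores, and the is_general helper are all hypothetical placeholders, not a real benchmark:

    # Hypothetical sketch: under the strict definition, a system counts as
    # "general" only if it beats the median human on *every* defined task.
    # Task names and scores below are invented for illustration.

    def is_general(ai_scores: dict[str, float],
                   median_human_scores: dict[str, float]) -> bool:
        """True only if the AI beats the median human on all defined tasks."""
        return all(
            ai_scores.get(task, float("-inf")) > human_score
            for task, human_score in median_human_scores.items()
        )

    median_human_scores = {"spreadsheet_cleanup": 0.70, "email_triage": 0.65, "tax_filing": 0.55}
    ai_scores = {"spreadsheet_cleanup": 0.95, "email_triage": 0.80, "tax_filing": 0.40}

    # A single task below the median human makes it "narrow" under this definition.
    print(is_general(ai_scores, median_human_scores))  # False

One failed task flips the whole answer, which is exactly the point: under this reading, general vs. narrow is all-or-nothing.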
The problem is that there are many definitions of AGI in circulation. Different people use the term in different ways. Because you’re speaking to many people when you write online, AGI doesn’t mean one thing. It doesn’t even mean a strictly defined thing for most of the people in the conversation, because they’ve absorbed much of the meaning from context. That’s how brains work.
If we all agreed on a definition like that, then we wouldn’t have this problem and it would be crisp.
Except in practice you’d find there were a few things it couldn’t do yet, but those things wouldn’t seem very important, so it would be very tempting to say “well, it meets that definition for most purposes, so we should think of it as mostly AGI”.
In a complex space, even a crisp definition will become complex and therefore vague.
This is known as the descriptivist view of language, I believe, and I think it’s simply correct. Words are used in complex ways that differ between people. Using them “correctly” means using them in ways your intended audience will understand. Unfortunately it’s not possible to do this perfectly. I think this is just how brains and the world work.
I think, at a certain point, a phrase is self-explanatory enough that you can write off a certain share of definitions as just being wrong. AGI exists as a term in contrast to Narrow AI, which means “AI that can do some things as well as a human, but not others”. For either term to have any semantic significance at all, AGI can’t have exceptions.
Using your example, a system that was very useful for doing three important things would be “a good narrow AI system”, or just “a useful AI tool”. No additional information is conveyed by calling it “AGI”.