I think something you aren’t mentioning is that at least part of the reason the definition of AGI has gotten so decoupled from its original intended meaning is that the current AI systems we have are unexpectedly spiky.
We've known for a while that it was possible to create narrow ASI; chess engines did this in 1997. We thought that a system which could do the broad suite of tasks that GPT is currently capable of doing would necessarily be able to do the other things on a computer that humans are able to do. That didn't really happen. GPT is already superhuman in some ways, and maybe superhuman for ~50% of economically viable tasks that are done via computer, but it still makes mistakes at other very basic things.
It's weird that GPT can name and analyze differential equations better than most people with a math degree, yet can't correctly cite a reference. We didn't expect that.
Another difficult thing about defining AGI is that we actually expect better than "median human level" performance, but not necessarily in an unfair way. Most people around the globe don't know the rules of chess, but we would expect AGI to be able to play at roughly the 1000 Elo level. Let's define AGI as being able to perform every computer task at the level of the median human who has been given one month of training. We haven't hit that milestone yet. But we may well blow past human performance on a few other capabilities before we get there.
Yeah, spikiness has been an issue, but the floor is starting to get mopped up. The “unable to correctly cite a reference” thing isn’t quite fair anymore, though even current SOTA systems aren’t reliable in that regard.
The point about needing background is good, though the AI might need specialized training/instructions for specialized tasks in the same way a human would. There’s no way it could know a particular organization’s classified operating procedures from the factory, for example. Defining (strong) AGI as being able to perform every computer task at the level of the median human who has been given appropriate training (once it’s had the same training a human would get, if necessary) seems sensible. (I guess you could argue that that’s technically covered under the simpler definition.)