I think any AI that could pass a pure-language Turing test could have such a narrow AI bolted on.
That’s precisely why the origin of the AI is so important—it’s only if the general AI developed these skills without bolt-ons that we can be sure it’s a real general intelligence.
That’s a sufficient condition, but I don’t think it’s a necessary one—it’s not only then that we’ll know it has real GI (general intelligence). For instance, it might have had, or adapted, narrow modules for those particular purposes before its GI became powerful enough.
Also, human GI is barely powerful enough to write the algorithms for new modules like that. In some areas we still haven’t succeeded; in others it took us hundreds of person-years of R&D. Humans are an example that with good enough narrow modules, the GI part doesn’t have to be… well, superhumanly intelligent.
Yes—my test criteria are unfair to the AI (arguably the Turing test is as well). I can’t think of methods that have good specificity as well as sensitivity.
We do have a general intelligence. Without it we’d be just smart chimps.
But in most fields where we have a dedicated module—visual recognition, spatial modeling, controlling our bodies, speech recognition, processing, and production—our GI couldn’t begin to replace it. And we haven’t been able to easily create equivalent algorithms (and the problems aren’t just computing power).
On the other hand, we’re perfectly capable of acquiring skills that we didn’t evolve to possess, e.g., flying planes.