> now that AI systems are already increasingly general
I want to point out that if you tried to quantify this properly, the argument falls apart (at least in my view). “All AI systems are increasingly general” would be false; there are still many useful but very narrow AI systems. “Some AI systems are increasingly general” would be true, but this highlights the continuing usefulness of the narrow-vs.-general distinction.
One way out of this would be to declare that only LLMs and their ilk count as “AI” now, with more narrow machine learning just being statistics or something. I don’t like this because of the commonality of methods between LLMs and the rest of ML; it is still deep learning (and in many cases, transformers), just scaled down in every way.
Hmm, I guess that didn’t properly convey what I meant. More like: LLMs are general in a sense, but in a very weird sense where they can perform some things at a PhD level while simultaneously failing at some elementary-school-level problems. You could say that they are not “general as in capable of learning widely at runtime” but “general as in they can be trained at training-time to do an immensely wide set of tasks”.
And this is then a sign that the original concept is no longer very useful: okay, LLMs are “general” in a sense. But if you’d told most people 10 years ago that “we now have AIs that you can converse with in natural language about almost any topic, they’re expert programmers and they perform at a PhD level on STEM exams”, that person would probably not have expected you to follow up with “oh, and the same systems repeatedly lose at tic-tac-toe without being able to figure out what to do about it”.
So now we’re at a point where it’s like: “okay, our AIs are ‘general’, but general does not seem to mean what we thought it would mean. Instead of talking about whether AIs are ‘general’ or not, we should come up with more fine-grained distinctions like ‘how good are they at figuring out novel stuff at runtime’. And maybe the whole notion of ‘human-level intelligence’ does not cut reality at the joints very well, and we should instead think about what capabilities are required to make an AI system dangerous.”