How do we define GOFAI here? If we're contrasting search/learning-based approaches with approaches that leverage specialized knowledge in particular domains (as Sutton does in The Bitter Lesson), then if the AGI learns anything particular about a field, isn't that leveraging "specialized knowledge in particular domains"? [1]
It's not clear to me that such learning should count as AI research, so it's not obvious to me the question makes sense. For example, AlphaZero was not GOFAI, but was its training process "doing" GOFAI, given that the training process was creating an expert system using (autonomously gathered) information about the specialized domain of Go?
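To make the contrast concrete, here's a toy sketch (all the functions, features, and weights below are mine, invented purely for illustration; none of this is real AlphaZero code). The same "specialized knowledge about Go" can live either in rules a human wrote down or in parameters a training process produced:

```python
# Toy contrast between hand-coded and learned evaluation. Both embody
# "specialized domain knowledge"; they differ only in who wrote it down.

def expert_system_eval(board):
    """GOFAI / expert-system style: a human hand-codes the heuristic."""
    territory = sum(row.count("B") - row.count("W") for row in board)
    return 0.7 * territory  # weight picked by a human expert

def learned_eval(board, weights):
    """AlphaZero style: `weights` come out of self-play training,
    i.e. autonomously gathered knowledge about the domain."""
    features = [sum(row.count("B") for row in board),
                sum(row.count("W") for row in board)]
    return sum(w * f for w, f in zip(weights, features))

board = ["BW.", ".B.", "W.."]
print(expert_system_eval(board))          # human-coded knowledge
print(learned_eval(board, [0.5, -0.5]))   # machine-produced knowledge
```

The question in the text is whether the process that produces `weights` counts as "doing" GOFAI, since its output is functionally an expert system.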
Maybe we want to say that in order for it to count as AI research, the AI needs to end up creating some new agent or something. Then the argument is more about whether the AI would want to spin up specialized sub-agents or tool-AIs to help it act in certain domains, and then we can ask whether, when it's trying to improve those sub-agents, it will hand-code specialized knowledge or rely on general principles.
As with today, this seems very much a function of how general the domain is. Note that GOFAI and improvements to GOFAI haven't really died; they've just gotten specialized. See compilers, compression algorithms, object-oriented programming, the Disease Ontology project, and the applications of many optimization & control algorithms.
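The peephole-optimization passes inside compilers are a good example of this kind of surviving, hand-coded, domain-specialized rule system. A minimal sketch (the instruction set and rewrite rules are invented for illustration, not taken from any real compiler):

```python
# A tiny peephole optimizer: hand-coded, domain-specialized rewrite
# rules of the sort that live on inside real compilers.

RULES = {
    ("ADD", 0): None,        # x + 0 is a no-op: drop it
    ("MUL", 1): None,        # x * 1 is a no-op: drop it
    ("MUL", 2): ("SHL", 1),  # strength reduction: x * 2 -> x << 1
}

def peephole(program):
    out = []
    for instr in program:
        rewritten = RULES.get(instr, instr)  # apply a rule if one matches
        if rewritten is not None:
            out.append(rewritten)
    return out

print(peephole([("ADD", 0), ("MUL", 2), ("ADD", 5)]))
# -> [('SHL', 1), ('ADD', 5)]
```

Every entry in `RULES` is exactly the kind of hand-engineered specialized knowledge The Bitter Lesson argues against, yet nobody expects compilers to stop shipping such passes.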
But note this is different from how most people use the term "GOFAI", by which they mean symbolic AI in contrast to neuro-inspired AI or connectionism. In that case, I expect the AI we get will not necessarily want to follow either of these two philosophical principles. It will understand how & why DNNs work, eliminate their flaws, amplify their strengths, and have the theorems (or highly probable heuristic arguments) to prove why its approach is sound.