I’m quite in agreement with this, and surprised that some people imagine only babble-and-prune when general search for problem solving is being discussed.
I’d like to add that a useful approach for evaluating the generality of a problem-solving agent would be to test for heuristic generation and use. I would expect an agent that can generate new heuristics in a targeted way to be far better at generalizing to novel tasks than one that has merely discovered a few heuristics and reuses them over and over. Maybe it’s worth someone putting thought into what a test set that could distinguish these two cases would look like.
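As a very rough sketch of what such a test might look like, here is a toy benchmark I made up for illustration (all names and task details are my own assumptions, not anything established): each task hides a different scoring rule, and an agent that derives a task-specific heuristic by probing the rule should beat one that reuses a single fixed heuristic across all tasks.

```python
import random

# Hypothetical toy benchmark: each "task" is a hidden scoring rule over integers.
# A heuristic-generating agent infers a task-specific rule from probes;
# a heuristic-reusing agent applies one fixed rule (e.g. "pick the largest") everywhere.

def make_task(seed):
    rng = random.Random(seed)
    sign = rng.choice([1, -1])        # hidden: maximize or minimize
    mod = rng.randrange(2, 7)         # hidden modulus
    return lambda x: sign * (x % mod) # hidden score of a candidate

def fixed_agent(candidates, probe):
    # Reuses the same heuristic on every task, ignoring the probe.
    return max(candidates)

def adaptive_agent(candidates, probe):
    # "Generates" a task-specific heuristic by probing the hidden score.
    return max(candidates, key=probe)

def evaluate(agent, n_tasks=200):
    wins = 0
    for seed in range(n_tasks):
        score = make_task(seed)
        candidates = random.Random(seed + 10**6).sample(range(100), 10)
        best_score = max(score(c) for c in candidates)
        if score(agent(candidates, score)) == best_score:
            wins += 1
    return wins / n_tasks
```

On this toy distribution, `evaluate(adaptive_agent)` is perfect by construction, while `evaluate(fixed_agent)` only wins when the largest candidate happens to score best; the gap between the two is the kind of signal a real test set would need to produce at scale, with far less transparent tasks.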