Even at the risk of sounding like someone who’s arguing by definition, I don’t think that a system without any strongly goal-directed behavior qualifies as an AGI; at best it’s an early prototype on the way towards AGI. Even an oracle needs the goal of accurately answering questions in order to do anything useful, and proposals for “tool AGI” just sound incoherent to me.
Of course, that raises the question of whether a heuristic soup approach can be used to make strongly goal-directed AGI. It’s clearly not impossible, given that humans are heuristic soups themselves; but it might be arbitrarily difficult, and it could turn out that a more purely math-based AGI is far easier to make both tractable and goal-oriented. Or it could turn out that it’s impossible to make a tractable and goal-oriented AGI by the math route, and that the heuristic soup approach works much better. I don’t think anybody really knows the answer to that at this point, though a lot of people have strong opinions one way or the other.