He seems to be arguing from the definition of an agent in BDI or similar logics; BDI stands for beliefs-desires-intentions, and the intentions are the goals. In this framework (more accurately, family of frameworks), agents necessarily, by definition, have goals. More generally, though, I have difficulty envisioning anything that could realistically be called an “agent” that does not have goals. Without goals you would have a purely reactive intelligence: it could not do anything without being specifically instructed, much like a conventional computer.
ADDED: Thinking further, such a “goal-less” intelligence couldn’t even try to foresee questions in order to have answers ready, or take any independent action. You seem to be arguing for an intelligence that is un-intelligent in any real sense of the word.
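To make the BDI point concrete, here is a minimal sketch (in Python, with invented class and method names; not any real BDI library) of why goals are baked into the architecture: strip out the intentions and the act step does nothing, which is exactly the purely reactive system described above.

```python
from dataclasses import dataclass, field


@dataclass
class BDIAgent:
    """Hypothetical, simplified BDI-style agent for illustration only."""
    beliefs: dict = field(default_factory=dict)     # what the agent holds true about the world
    desires: list = field(default_factory=list)     # states it would like to bring about
    intentions: list = field(default_factory=list)  # desires it has committed to, i.e. its goals

    def perceive(self, observation: dict) -> None:
        # Update beliefs from new percepts.
        self.beliefs.update(observation)

    def deliberate(self) -> None:
        # Commit to the desires that current beliefs say are achievable.
        self.intentions = [d for d in self.desires if self.beliefs.get(f"can_{d}", False)]

    def act(self) -> list:
        # Proactive step: pursue intentions without waiting for instructions.
        # With no intentions (no goals), this returns nothing -- purely reactive.
        return [f"pursue:{goal}" for goal in self.intentions]


agent = BDIAgent(desires=["answer_question"])
agent.perceive({"can_answer_question": True})
agent.deliberate()
print(agent.act())  # ['pursue:answer_question']; with empty desires it prints []
```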