So an agent is already associated with goals in the sense of its actual effect on its environment. Given that the agent's own future state (its design) is an easily controlled part of that environment, it's one of the things that will be optimized...
If you added general intelligence and consciousness to IBM Watson, where would the urge to refine or protect its Jeopardy skills come from? Why would it care if you pulled the plug on it? I just don't see how optimization and goal protection are inherent features of general intelligence, agency, or even consciousness.
He seems to be arguing from the definition of an agent in BDI or similar logics; BDI stands for beliefs-desires-intentions, and the intentions are goals. In this framework (more accurately, this family of frameworks), agents necessarily, by definition, have goals. More generally, though, I have difficulty envisioning anything that could realistically be called an "agent" that does not have goals. Without goals you would have a totally reactive intelligence: it could not do anything without being specifically instructed, like a modern computer.
ADDED: Thinking further, such a "goal-less" intelligence couldn't even try to foresee questions in order to have answers ready, or take any independent action. You seem to be arguing for an intelligence that is un-intelligent in any real sense of the word.
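To make the BDI point concrete, here is a minimal sketch of the perceive-deliberate-act loop that BDI-style agents run. It is purely illustrative, not the API of any actual BDI framework; the names (`BDIAgent`, `perceive`, `deliberate`, `step`) and the modelling of desires as (relevance-test, action) pairs are assumptions made for the example. What it shows is exactly the argument above: remove the desires and the deliberate step has nothing to commit to, so the loop collapses into a purely reactive machine.

```python
# Minimal illustrative sketch of a BDI-style control loop.
# Not the API of any real BDI framework; all names here are
# assumptions made for this example.

class BDIAgent:
    def __init__(self, desires):
        self.beliefs = {}        # the agent's model of its environment
        self.desires = desires   # candidate goals: (is_relevant, act) pairs
        self.intentions = []     # the goals the agent has committed to

    def perceive(self, percept):
        """Revise beliefs from new information about the environment."""
        self.beliefs.update(percept)

    def deliberate(self):
        """Commit to whichever desires are relevant given current beliefs.

        This is the step that makes the agent proactive: with an empty
        desire set, nothing is ever committed to, and the loop below
        degenerates into pure stimulus-response.
        """
        self.intentions = [act for is_relevant, act in self.desires
                           if is_relevant(self.beliefs)]

    def step(self, percept):
        """One cycle of the classic perceive-deliberate-act loop."""
        self.perceive(percept)
        self.deliberate()
        for act in self.intentions:
            act(self.beliefs)


# Example: an agent whose single desire is to answer questions it sees.
agent = BDIAgent(desires=[
    (lambda beliefs: "question" in beliefs,
     lambda beliefs: print(f"Answering: {beliefs['question']}")),
])
agent.step({"question": "What is the capital of France?"})
```

Note that even this toy agent only anticipates anything because a desire is wired in; pass `desires=[]` and `step` updates beliefs but never acts, which is the "totally reactive intelligence" described above.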