One caveat is that you could imagine building a system, such as an oracle, that does not itself ensure some objective is met, but which helps humans achieve that objective (e.g. by suggesting plans that humans can execute if desired). In that case, the combined system of AI + humans is goal-directed.
Note that a human+AI system can only execute the human's goals if the human can control the AI—it is predicated on the control problem being solved well enough.
We think behavioral goal-directedness (reliably achieving outcomes) is clearly favored by economic pressures.
“Reliably” implies control as well. Commercial pressures tend towards directable power. Modern cars are much more powerful than early ones, but also safer, because safety technology didn’t just keep up with increased power—it overtook it.