If we want a robot that can navigate mazes, we could put some known pathfinding/search algorithms into it.
Or we could put a neural network in it and run it through thousands of trials with slowly increasing levels of difficulty.
In the first case it is engineered-reliable; in the second case it is learning.
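To make the engineered case concrete, here is a minimal breadth-first-search pathfinder for a grid maze (a sketch; the grid encoding, function name, and test maze are my own, not anything from the original):

```python
from collections import deque

def bfs_path(maze, start, goal):
    """Breadth-first search over a grid maze.

    maze: list of strings, '#' = wall, anything else = open.
    start, goal: (row, col) tuples.
    Returns a shortest list of cells from start to goal, or None.
    """
    rows, cols = len(maze), len(maze[0])
    frontier = deque([start])
    came_from = {start: None}  # parent pointers for path reconstruction
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk parent pointers back to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable

maze = [
    "S.#",
    ".#.",
    "..G",
]
path = bfs_path(maze, (0, 0), (2, 2))
```

The point of the contrast: every behavior of this solver follows from inspectable code, so its reliability can be argued for directly, whereas the trained network's behavior can only be characterized statistically from its trials.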
On rereading the opening: “searching possible chains of causality, across learned domains, in order to find actions leading to a future ranked high in a preference ordering”
I feel like you skipped over the concept of learning. Yes, we can build maze and chess searchers, because we have learned those domains.
How do we engineer something to be reliably better than us at learning? (To the point that we trust it to use its learning to construct its preference ordering.)