Real-world intentionality seems to be a separate problem from building a system that can figure out how to solve mathematically defined problems, and likely a very hard one (in the sense of being very difficult to define mathematically).
I think I disagree with you, depending on what you mean here. Limited “intentionality” (in the sense of Dennett’s intentional stance) shows up as soon as you have a system that selects the best of several actions using prediction algorithms and an evaluation function: a chess engine like Rybka, in the context of a game, is well modeled as selecting good moves. That intentionality is limited because the system has a tightly constrained set of actions and evaluates consequences only with a very limited model of the world, but both of these things can be scaled up. Robust problem-solving and prediction algorithms capable of handling arbitrary problems would be terribly hard to build, but intentionality would not be much of a further problem. On the other hand, if we talk about very narrowly defined problems, then systems that do well on those will not be able to address the economically and scientifically important mass of ill-specified problems.
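To make the pattern concrete, here is a minimal sketch of that predict-then-evaluate action selection; the number-line “world model,” the goal of 10, and the action set are all hypothetical, chosen only to keep the example self-contained:

```python
# Minimal sketch of "limited intentionality": a fixed action set, a
# predictive model of consequences, and an evaluation function, with the
# agent choosing whichever action scores best. Everything here is a toy
# assumption, not any real engine's interface.

def predict(state: int, action: int) -> int:
    """Toy world model: an action simply shifts the state."""
    return state + action

def evaluate(state: int) -> float:
    """Toy evaluation function: prefer states near a goal of 10."""
    return -abs(10 - state)

def choose_action(state: int, actions: list[int]) -> int:
    """Select the action whose predicted consequence evaluates best."""
    return max(actions, key=lambda a: evaluate(predict(state, a)))

if __name__ == "__main__":
    # From state 7 with actions {-1, +1, +2}, the intentional stance
    # predicts the agent "wants" to reach 10, so it picks +2.
    print(choose_action(7, [-1, 1, 2]))  # -> 2
```

The intentional stance works on this agent exactly as described: treating it as if it “wants” the goal state predicts its behavior, even though it is just an argmax over a tiny model.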
Also, the separability of action and analysis is limited: Rybka can evaluate opening moves, looking ahead a fair way, but it cannot hand you a comprehensive strategy for winning the whole game in advance, since the later moves depend on how play actually unfolds. You could put a “human in the loop” who uses Rybka to evaluate particular moves and then makes the actual move, but at the cost of adding a bottleneck (humans are slow, and cannot follow thousands or millions of decisions at once). The more experimentation and interactive learning matter, the less viable the detached analytical algorithm becomes.
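As a hedged illustration of where that bottleneck sits, here is a sketch of the human-in-the-loop arrangement; score_move is a hypothetical stand-in for an engine evaluation, not Rybka’s actual interface:

```python
# Sketch of a human-in-the-loop decision cycle: the engine only scores
# candidate moves, and a human picks one each turn. The scoring function
# is a placeholder; the point is that every cycle blocks on the human.

def score_move(position: str, move: str) -> float:
    """Hypothetical stand-in for an engine evaluation."""
    return float(len(move))  # placeholder heuristic, not real chess logic

def human_in_the_loop_turn(position: str, candidates: list[str]) -> str:
    """Engine analyzes; a human decides. One blocking call per decision."""
    scored = sorted(candidates,
                    key=lambda m: score_move(position, m), reverse=True)
    for i, move in enumerate(scored):
        print(f"{i}: {move} (score {score_move(position, move):.1f})")
    # The pipeline stalls here on human judgment: fine for one chess game,
    # unworkable for thousands of concurrent decisions.
    choice = int(input("pick a move index: "))
    return scored[choice]
```

The structure makes the argument visible: the analysis step parallelizes cheaply, but the decision step runs at human speed, one choice at a time.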