“Intentionality” fits somewhat nicely with Michael Bratman’s view of intentions as partial plans: you fix some aspect of your policy to satisfy a desire, so that you are robust against noisy perturbations (noisy signals, moments of “weakness of will”, etc.), can use the belief that you’re going to behave in a certain way (as well as other agents’ precommitments) as an input to your further decisions and beliefs, don’t have to precompute everything at runtime, etc.[1]
A downside of the word is that it collides in the namespace with how “intentionality” is typically used in philosophy of mind, where it means something closer to referentiality or “aboutness” (cf. Tomasello’s shared intentionality).
Perhaps the concept of “deliberation” from LOGI (Levels of Organization in General Intelligence) is trying to point in this direction, although it covers more than just consulting explicit representations.
The human mind, owing to its accretive evolutionary origin, has several major distinct candidates for the mind’s “center of gravity.” For example, the limbic system is an evolutionarily ancient part of the brain that now coordinates activities in many of the other systems that later grew up around it. However, in (cautiously) considering what a more foresightful and less accretive design for intelligence might look like, I find that a single center of gravity stands out as having the most complexity and doing most of the substantive work of intelligence, such that in an AI, to an even greater degree than in humans, this center of gravity would probably become the central supersystem of the mind. This center of gravity is the cognitive superprocess which is introspectively observed by humans through the internal narrative—the process whose workings are reflected in the mental sentences that we internally “speak” and internally “hear” when thinking about a problem. To avoid the awkward phrase “stream of consciousness” and the loaded word “consciousness,” this cognitive superprocess will hereafter be referred to as deliberation.
[ … ]
Deliberation describes the activities carried out by patterns of thoughts. The patterns in deliberation are not just epiphenomenal properties of thought sequences; the deliberation level is a complete layer of organization, with complexity specific to that layer. In a deliberative AI, it is patterns of thoughts that plan and design, transforming abstract high-level goal patterns into specific low-level goal patterns; it is patterns of thoughts that reason from current knowledge to predictions about unknown variables or future sensory data; it is patterns of thoughts that reason about unexplained observations to invent hypotheses about possible causes. In general, deliberation uses organized sequences of thoughts to solve knowledge problems in the pursuit of real-world goals.
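To make the partial-plan picture above a bit more concrete, here is a minimal toy sketch. It is my own illustration, not anything from Bratman or LOGI: the `Agent` class, the noise threshold, and the stubbed `deliberate` step are all invented for the example. The agent deliberates once, fixes part of its policy as an intention, lets small perturbations bounce off that commitment, and exposes the commitment as a belief that later decisions (its own or other agents’) can condition on.

```python
import random

class Agent:
    """Toy agent that forms an 'intention' (a partial plan) once and then
    treats it as a fixed input to later decisions, rather than re-deriving
    its whole policy on every step."""

    def __init__(self, desire: str):
        self.desire = desire
        self.intention = None  # the fixed part of the policy

    def deliberate(self) -> str:
        # Expensive planning happens here, once, rather than at runtime on
        # every step. (A stub; imagine search or optimization.)
        return f"take the 8am train to satisfy '{self.desire}'"

    def commit(self) -> None:
        if self.intention is None:
            self.intention = self.deliberate()

    def act(self, noisy_signal: float) -> str:
        # Small perturbations (noise, a moment of 'weakness of will') do not
        # reopen deliberation; the standing intention screens them off.
        if self.intention is not None and abs(noisy_signal) < 0.9:
            return self.intention
        # Only a sufficiently large surprise forces re-deliberation.
        self.intention = None
        self.commit()
        return self.intention

    def downstream_belief(self) -> str:
        # The commitment can feed into further decisions and beliefs,
        # both the agent's own and other agents'.
        return f"I believe I will: {self.intention}"


agent = Agent(desire="be at the meeting by 9")
agent.commit()
for _ in range(3):
    print(agent.act(noisy_signal=random.uniform(-0.5, 0.5)))
print(agent.downstream_belief())
```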
Cf. https://www.lesswrong.com/w/deliberate-practice. Wiktionary defines “deliberate” in terms of “intentional”: https://en.wiktionary.org/wiki/deliberate#Adjective.
At least that’s the Bratman-adjacent view of intention that I have.