A great deal of complexity is buried underneath the simple word “implies” here.
A perfectly consistent and rational agent A, I agree, would likely be unable to intend to perform some task T in the absence of some reasonably high level of confidence in the proposition “A can do T,” which is close enough to what we colloquially mean by the belief that A can do T. After all, such an A would routinely evaluate the evidence for and against that proposition as part of the process of validating the intention.
(Of course, such an A might attempt T in order to obtain additional evidence. But that doesn’t involve the intention to do T so much as the intention to try to do T.)
The thing is, most humans don’t validate their intentions nearly this carefully, and it’s consequently quite possible for a person to intend to do something without believing they can do it. This is inconsistent, yes, but we do it all the time.
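To make the contrast concrete, here is a minimal toy sketch in Python of the two cases. Everything in it is invented for illustration: the names `RationalAgent`, `OrdinaryHuman`, `THRESHOLD`, and the particular credences are assumptions, not anything from the discussion above; the only claim it encodes is that the rational agent gates intention on a confidence check while the human skips that step.

```python
# Toy model: intention formation with and without a validation step.
# All names and numbers here are illustrative inventions.

THRESHOLD = 0.8  # assumed stand-in for "some reasonably high level of confidence"

class RationalAgent:
    def __init__(self, credences: dict[str, float]):
        # credences maps a task T to the agent's confidence in "A can do T"
        self.credences = credences

    def intend(self, task: str) -> bool:
        # Validation: only form the intention to do T if the credence
        # in "A can do T" is high enough.
        return self.credences.get(task, 0.0) >= THRESHOLD

    def intend_to_try(self, task: str) -> bool:
        # Intending to *try* T (e.g. to gather more evidence) requires
        # no such credence, so it is always available.
        return True

class OrdinaryHuman(RationalAgent):
    def intend(self, task: str) -> bool:
        # Humans routinely skip the validation step, so the intention
        # can coexist with low confidence in "A can do T".
        return True

a = RationalAgent({"run a marathon": 0.3})
h = OrdinaryHuman({"run a marathon": 0.3})
print(a.intend("run a marathon"))         # False: fails validation
print(a.intend_to_try("run a marathon"))  # True: trying remains open
print(h.intend("run a marathon"))         # True: inconsistent, but common
```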