Without this metaphysics there isn’t an obvious motivation for a theory of decisions.
There isn’t really any need for “choices”, except in the sense that internal states are also inputs that affect the outputs. Sufficiently complex agents can have one or more decision theories encoded in their internal state, and it seems like being able to communicate, evaluate, and update such theories would at least sometimes be a useful trait.
It’s easy to imagine agents that can’t do that, but it’s more interesting (and more reflective of “intelligence”) to assume they can.