If you drop enough of the axioms (e.g. the axiom of independence) from the expected utility formalisation, you can represent the behaviour of any creature you care to imagine with a utility function.
At some point, you can’t call it a utility function any more.
Eventually, such a function just becomes a map between sensory inputs (including memories) and motor outputs.
Such a hypothetical function is as useless as the supposed function, in a deterministic universe, for calculating all future states of the universe from an exact knowledge of its present.
Richard, I think your first point is probably based on a misconception about the idea. It would still be a utility function—in that it would assign real-valued utilities to possible actions (before selecting the action with highest utility). Being that which is maximised during action is what the term “utility” means.
Sure, if you go beyond that, then the word “utility” might eventually become inappropriate, but that is not what is being proposed.
I can’t make much sense of the second point. Utility functions are maps between sensory inputs (including memories) and scalar values associated with possible motor outputs. They are not useless if you do things like drop the axiom of independence. Indeed, the axiom of independence is the most frequently-dropped axiom.
It is generally useful to have an abstract utility-based model that can model the behaviour of any computable creature by plugging in a utility function.
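The claim that any computable creature can be modelled this way has a simple constructive form: give the maximiser a degenerate utility function that scores the creature's actual choice 1 and everything else 0. A minimal sketch (the policy and names here are illustrative, not anything from the thread):

```python
# Sketch: any deterministic policy can be recast as utility maximisation
# by a degenerate utility function (1 for the policy's choice, 0 otherwise).

def policy(percept):
    # Some arbitrary computable creature: acts on the percept's parity.
    return "left" if percept % 2 == 0 else "right"

def utility_from_policy(policy):
    # Wrap the policy in a utility function over (percept, action) pairs.
    def utility(percept, action):
        return 1.0 if action == policy(percept) else 0.0
    return utility

def maximiser(utility, percept, actions):
    # Generic utility-maximising agent: pick the highest-scoring action.
    return max(actions, key=lambda a: utility(percept, a))

u = utility_from_policy(policy)
# The maximiser reproduces the original creature's behaviour exactly.
assert all(maximiser(u, p, ["left", "right"]) == policy(p) for p in range(10))
```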
Hang on, a moment ago they were functions from outputs to values. Now they’re functions from inputs to values. Which are they?
Gonna take a wild stab:
A “Utility Function” is a function from the space of (sensory inputs including memories) to the space of (functions from outputs to values).
For any given set of (sensory inputs including memories) we can call that set’s image under our “Utility Function” a “utility function” and then sometimes mess up the capitalization.
Is that more clear, and/or is that what was being said?
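That curried reading can be written down directly. A minimal sketch, with illustrative names of my own choosing (the capital-U function fixes the inputs and returns the lowercase function from actions to values):

```python
from typing import Callable

Action = str
# A "Utility Function": sensory inputs (incl. memories) -> (action -> value).
UtilityFunction = Callable[[dict], Callable[[Action], float]]

def big_u(inputs: dict) -> Callable[[Action], float]:
    # Toy example: prefer "eat" when hungry, "rest" otherwise.
    def little_u(action: Action) -> float:  # the lowercase "utility function"
        if inputs["hungry"]:
            return 1.0 if action == "eat" else 0.0
        return 1.0 if action == "rest" else 0.0
    return little_u

little_u = big_u({"hungry": True})         # fix the inputs...
assert little_u("eat") > little_u("rest")  # ...leaving a map from actions to values
```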
Utility functions are maps between sensory inputs (including memories) and scalar values associated with possible motor outputs.
Yes, that’s what I already quoted. But earlier in the same comment you said this:

It would still be a utility function—in that it would assign real-valued utilities to possible actions (before selecting the action with highest utility).

There you are saying that it maps actions to utilities. Hence my question.
I have something to say in response, but I can’t until I know what you actually mean, and the version that you have just reasserted makes no sense to me.
Utilities are scalar values associated with possible motor outputs (“actions” is a synonym for “motor outputs”).
The scalar values an agent needs in order to decide what to do are the ones which are associated with its possible actions. Agents typically consider their possible actions, consider their expected consequences, assign utilities to these consequences—and then select the action that is associated with the highest utility.
The inputs to the utility function are all the things the agent knows about the world—so: its sense inputs (up to and including its proposed action) and its memory contents.
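The decision procedure described above (enumerate actions, predict consequences, score the consequences, pick the best-scoring action) can be sketched as follows. The world model and scores are placeholders of my own, not anything from the thread:

```python
# Sketch of the decision loop: enumerate actions, predict consequences,
# assign utilities to the consequences, select the highest-utility action.

def predict(state, action):
    # Toy world model: the expected consequence of each action.
    return {"wait": state, "move": state + 1}[action]

def utility_of_consequence(consequence):
    # Toy utility over consequences: higher states are better.
    return float(consequence)

def choose(state, actions):
    # The action associated with the highest utility wins.
    return max(actions, key=lambda a: utility_of_consequence(predict(state, a)))

assert choose(0, ["wait", "move"]) == "move"
```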