The difference between goal models and preference orderings/utility functions is analogous to the difference between the intensional and extensional meanings of an expression (e.g., a linguistic expression or a piece of code). The extensional meaning is a black-box-y input-output mapping. The intensional meaning is something more like a program that “actually maps” the input to the output.[1]
To take the classic example of Frege: “morning star” and “evening star” have the same extension (the planet Venus) but different intensions (“the bright star-thing that you see in the morning” vs “the bright star-thing that you see in the evening”). They point at the same thing, but point at the thing in different ways. In the case of language, Frege called the intensional meaning “sense”, and the extensional meaning “reference”.
In programming, a function with all its source code provides the intensional meaning, whereas the extensional meaning is just the lookup table obtained by (often merely hypothetically) evaluating it on all valid inputs.
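To make the programming analogy concrete, here is a toy sketch (my example, not the author's): two functions with the same extension, i.e., identical input-output behavior on every valid input, but different intensions, i.e., different programs that “actually map” input to output.

```python
def double_by_addition(n: int) -> int:
    """Intension: add n to itself."""
    return n + n

def double_by_shift(n: int) -> int:
    """Intension: shift n's binary representation left by one bit."""
    return n << 1

# The extensional meaning is just the (usually hypothetical) lookup table.
# On a finite subdomain we can actually materialize it and check agreement:
subdomain = range(-100, 101)
extension_a = {n: double_by_addition(n) for n in subdomain}
extension_b = {n: double_by_shift(n) for n in subdomain}
assert extension_a == extension_b  # same extension, different intensions
```

The two “intensions” here would even differ in non-extensional properties (readability, cost, how they generalize if edited), which is exactly the kind of information the lookup table throws away.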
Given a goal model, you can sometimes extract from it a preference ordering or an equivalence class of utility functions, at least for a specific subdomain in which you want to act, or only a partial[2]/approximate preference ordering or an approximate equivalence class of utility functions (given that you’re a computationally bounded entity).
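A minimal sketch of this extraction, with all names and structure my own invention: a goal model stores criteria together with the reasons behind them, and from it we can pop out a bare utility function that is only valid on a restricted subdomain of outcomes.

```python
# Hypothetical goal model: criteria annotated with rationales (the
# "intensional" part, which the extracted utility function discards).
goal_model = {
    "health":  {"weight": 3.0, "rationale": "instrumental for almost everything"},
    "leisure": {"weight": 1.0, "rationale": "terminally valued"},
}

def extract_utility(goal_model, subdomain_keys):
    """Collapse the goal model into a flat utility function over outcomes
    described only by the features in subdomain_keys. The rationales are
    thrown away: this is the extensional shadow of the intensional model."""
    def utility(outcome: dict) -> float:
        return sum(goal_model[k]["weight"] * outcome[k] for k in subdomain_keys)
    return utility

u = extract_utility(goal_model, ["health", "leisure"])
a = {"health": 0.9, "leisure": 0.2}
b = {"health": 0.5, "leisure": 0.8}
# The extracted utility induces a preference ordering on this subdomain:
prefers_a = u(a) > u(b)
```

Note that `u` is only one representative of an equivalence class (any positive affine transform of it induces the same ordering), and it says nothing about outcomes with features outside the chosen subdomain.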
One interesting place where it seems to me that you strictly need something like a goal model is handling ontological crises.[3] A goal model, with its internal “logic” and perhaps “meta-logic” explaining how the various parts of your goal system relate to each other, and why and how this should in general work, gives you more capacity to restore the value system to equilibrium when something breaks, e.g., when some important thing that was load-bearing for your values turns out not to exist and this throws an internal error (e.g., see: rescuing the utility function).
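As a toy sketch of why the intensional structure helps here (hypothetical structure, not a claim about how any real agent works): because each value records what it was *for*, a value whose referent turns out not to exist can be re-derived from its rationale rather than silently dropped or left throwing errors.

```python
# Hypothetical goal model: each value records its referent and what it was
# load-bearing for. A flat utility function would lack the "because" field.
goal_model = {
    "phlogiston_release": {
        "weight": 2.0,
        "because": "tracks combustion, which I care about for warmth",
        "referent": "phlogiston",
    },
    "warmth": {"weight": 1.0, "because": "terminal", "referent": "warmth"},
}

def rescue(goal_model, surviving_ontology, remap):
    """Rebuild the goal system after some referents turn out not to exist.
    `remap` stands in for the model's internal 'logic': it says what each
    broken value was actually tracking, so its weight can be reattached."""
    rescued = {name: dict(value) for name, value in goal_model.items()
               if value["referent"] in surviving_ontology}
    for name, value in goal_model.items():
        if value["referent"] not in surviving_ontology:
            heir = remap[name]  # what this value was load-bearing for
            rescued[heir]["weight"] += value["weight"]
    return rescued

# Phlogiston turns out not to exist; its weight flows to what it tracked.
new_model = rescue(goal_model, {"warmth"}, {"phlogiston_release": "warmth"})
```

A bare utility function over phlogiston-states would have no analogue of `remap` to consult; the re-derivation is only possible because the model kept the reasons around.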
[1] Not the phrasing Frege would use to describe it, but it checks out.

[2] “Partial” in the sense that it is a part/subset of the full ordering, not necessarily that it may have incomparable pairs of elements.

[3] Or at least handling them in full generality, as I expect there are toy examples where you don’t need a goal model to handle an ontological crisis well.