There’s a good reason why humans are interested in people’s motivations—they are genuinely useful for understanding another system’s behaviour. The same idea illustrates why knowing a system’s utility function is interesting.
That doesn’t follow. The reason we find it useful to know people’s motivations is that they are capable of a very wide range of behavior. With so wide a range, we need a way to quickly narrow down the set of things we should expect them to do. Knowing that they’re motivated to achieve result R, we can restrict attention to just the set of actions or events capable of bringing about R.
Given the huge set of things humans can do, this is a huge reduction in the search space.
OTOH, if I want to predict the behavior of a thermostat, it does not help to know the utility function you have imputed to it, because this would not significantly reduce the search space compared to knowing its few pre-programmed actions. It can only do a few things in the first place, so I don’t need to think in terms of “what are all the ways it can achieve R?”—the thermostat’s form already tells me that.
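The contrast could be sketched in code (the names and the toy action set here are my own illustration, not anything from the discussion): a thermostat’s behavior is fully enumerable from its form, whereas for a system with a large repertoire, knowing its goal R is what prunes the prediction problem.

```python
# Illustrative sketch only: a thermostat vs. a wide-repertoire agent.

def thermostat_action(temp, setpoint):
    """A thermostat has only two actions; its form already predicts it."""
    return "heat_on" if temp < setpoint else "heat_off"

# No imputed utility function needed -- the whole behavior fits in one line:
assert thermostat_action(15, 20) == "heat_on"
assert thermostat_action(25, 20) == "heat_off"

# A hypothetical agent with many possible actions, each with known effects:
ACTIONS = {
    "open_window": {"cools_room"},
    "light_fire": {"warms_room"},
    "turn_on_ac": {"cools_room"},
    "read_book": set(),
}

def actions_achieving(result):
    """Knowing the agent wants result R prunes the search space to
    just the actions capable of bringing R about."""
    return {a for a, effects in ACTIONS.items() if result in effects}

# Goal knowledge cuts four candidate actions down to one:
assert actions_achieving("warms_room") == {"light_fire"}
```

The point the sketch makes is proportional: the pruning step `actions_achieving` buys a lot when `ACTIONS` is huge (as with humans), and nothing when the system only has two actions to begin with.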
Still, despite my criticism of this parallel, I think you have shed some light, at least for me, on when it is useful to describe a system in terms of a utility function.