A utility function is only a method of approximating an agent’s behavior. If I wanted to make a precise description, I wouldn’t bother “agent-izing” the object in the first place. “The rock falls” vs. “the rock wants to fall” is a meaningless distinction. In that sense, nothing “has a utility function”, since utility functions aren’t ontologically fundamental.
When I say “does X have a utility function?”, I mean “is it useful and intuitive to predict the behavior of X by ascribing agency to it and using a utility function?”. So the real question is, do humans deviate from the model to such an extent that the model should not be used? It certainly doesn’t seem like the model describes anything else better than it describes humans, although as AI improves that might change.
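To make that operational reading concrete, here is a minimal sketch (the names and the toy system are hypothetical, not from the discussion itself): hypothesize a utility over outcomes, predict that X picks the action whose outcome scores highest, and judge the model by how often that prediction matches what X actually does.

```python
# Minimal sketch: "X has a utility function" read as "a utility-maximizing
# model predicts X's behavior well". Hypothetical names and toy data.

def predict_action(actions, outcome_of, utility):
    """Predict the action a utility-maximizer would take."""
    return max(actions, key=lambda a: utility(outcome_of(a)))

def model_accuracy(observations, utility):
    """Fraction of observed choices the utility model predicts correctly.

    `observations` is a list of (actions, outcome_of, chosen_action) tuples.
    """
    hits = sum(
        predict_action(actions, outcome_of, utility) == chosen
        for actions, outcome_of, chosen in observations
    )
    return hits / len(observations)

# Toy example: a thermostat "wants" the temperature near its setpoint.
setpoint = 21.0
utility = lambda temp: -abs(temp - setpoint)
actions = ["heat", "cool", "idle"]
effect = {"heat": 1.0, "cool": -1.0, "idle": 0.0}

observations = [
    (actions, (lambda a, t=t: t + effect[a]), chosen)
    for t, chosen in [(18.0, "heat"), (25.0, "cool"), (21.0, "idle")]
]
print(model_accuracy(observations, utility))  # 1.0 -- the ascription is useful here
```

Whether the ascription is “useful and intuitive” is then just the question of how high that accuracy is, and at what modeling cost, compared to treating the system as a plain causal mechanism.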
So even if I agree that humans don’t technically “have a utility function” any more than any other object does, I would say that if anything on this planet is worth ascribing agency to and describing with a utility function, it’s animals. So if humans and other animals don’t have a utility function, who does?
So if humans and other animals don’t have a utility function, who does?
No one yet. We’re working on it.
So the real question is, do humans deviate from the model to such an extent that the model should not be used?
Yes. You will find it much more fruitful to model most humans (including yourself) as causal systems, and if you wanted to model human behavior with a utility function, you’d either have a lot of error or a lot of trouble adding enough epicycles.
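A toy illustration of the “error or epicycles” point, under the simplifying assumption that, over a finite set of options, having a utility function just means having some consistent ranking (the data below are made up):

```python
# Made-up data: pairwise choices that cycle -- A over B, B over C, C over A --
# cannot all be reproduced by any single utility assignment, so a utility model
# of such behavior must either mispredict or be patched with extra machinery.

from itertools import permutations

observed_choices = [("A", "B"), ("B", "C"), ("C", "A")]  # "x chosen over y"

def fits_some_utility(choices):
    """True iff some ranking (equivalently, some utility assignment over a
    finite option set) reproduces every observed choice."""
    items = {x for pair in choices for x in pair}
    for ranking in permutations(items):
        rank = {item: i for i, item in enumerate(ranking)}  # lower = preferred
        if all(rank[x] < rank[y] for x, y in choices):
            return True
    return False

print(fits_some_utility(observed_choices))  # False: no utility function fits
```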
As I said, though, VNM isn’t useful descriptively; if you use it that way, it’s tautological and doesn’t really tell you anything. Where it shines is in the design of agenty systems: “If we had these preferences, what would that imply about where we would steer the future?” (which worlds are ranked high), and “If we want to steer the future over there, what decision architecture do we need?”
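As a rough sketch of that design-side use (the utility function, options, and numbers below are all hypothetical): once you write down a utility function over world-states, “which worlds are ranked high” falls out of it, and the minimal “decision architecture” is an expected-utility argmax over the available options.

```python
# Design-side sketch: given a utility function over worlds, rank lotteries by
# expected utility and steer toward the highest-ranked one. Toy values only.

def expected_utility(lottery, utility):
    """Expected utility of a lottery given as [(probability, world), ...]."""
    return sum(p * utility(world) for p, world in lottery)

def choose(options, utility):
    """The simplest VNM-style decision rule: argmax of expected utility."""
    return max(options, key=lambda lottery: expected_utility(lottery, utility))

# Toy preferences over toy worlds.
utility = lambda world: world["flourishing"] - 2 * world["catastrophe_risk"]

safe   = [(1.0, {"flourishing": 5,  "catastrophe_risk": 0.0})]
gamble = [(0.5, {"flourishing": 10, "catastrophe_risk": 0.0}),
          (0.5, {"flourishing": 0,  "catastrophe_risk": 1.0})]

print(expected_utility(safe, utility), expected_utility(gamble, utility))  # 5.0 4.0
print(choose([safe, gamble], utility) is safe)  # True: the rule steers toward "safe"
```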
OK, I think we’re on the same page now.