Just because you can construct a utility-based model to represent an agent doesn’t mean that the model so constructed is at all useful or informative about what is actually going on.
To get the utility function for an arbitrary agent, as described in the paper your link points to, you have to know what the agent would do in every possible situation. At that point, there’s nothing left for the utility function to tell you.
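The point can be made concrete with a minimal sketch (the names and the toy policy here are hypothetical, chosen only for illustration): if you already have a complete description of what an agent would do in every situation, you can always construct a utility function that rationalizes that behaviour, simply by assigning utility 1 to whatever the agent does and 0 to everything else. The resulting "utility maximizer" is just a restatement of the policy you started with, and predicts nothing new.

```python
def utility_from_policy(policy):
    """Build a utility function u(situation, action) from a full policy map.

    Assigns utility 1 to the action the policy already prescribes and 0 to
    every other action -- a trivial but valid rationalization.
    """
    def u(situation, action):
        return 1.0 if policy[situation] == action else 0.0
    return u

# Hypothetical agent: we already know its choice in every situation.
policy = {"offered_bet": "decline", "offered_gift": "accept"}
u = utility_from_policy(policy)

# The "maximizer" just reproduces the policy we fed in.
best = max(["accept", "decline"], key=lambda a: u("offered_bet", a))
print(best)  # decline -- exactly what the policy said, no new information
```

This is why knowing the full policy first drains the constructed utility function of predictive content: the model can never disagree with the behaviour it was built from.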
So, to recap, the claim in this post was that “expected utility maximization” is “completely wrong as a descriptive theory of how humans behave”.
It seems like an ungrounded claim to me. It is not applying expected utility maximization to humans that is wrong, but the particular way this post applies it.
I don’t agree with the claim that general-purpose utility-based models are not “useful” or “informative”. One point of them is that they allow comparison of the goals of arbitrary agents within a common framework. If you don’t yet see how that might be useful, you should probably think about the issue some more.
In this example, the utility-based model shows that humans are doing something other than maximizing their future wealth. What that something is is not immediately obvious. They may, for example, be treating small transactions as a means of signalling to others about how they behave when the stakes are larger, or they may care more about how often they gain than about how much. What it does not mean is that they are not acting as expected utility maximizers.
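To illustrate the distinction, here is a hedged sketch (log-utility is a standard textbook choice, assumed here for illustration, and the bet amounts are made up): an agent maximizing the expected utility of its wealth can rationally decline a bet with positive expected wealth. So observing that people don't maximize expected wealth tells you nothing, by itself, about whether they are expected utility maximizers.

```python
import math

def expected_log_utility(wealth, outcomes):
    """Expected log-utility over outcomes: list of (probability, wealth_change)."""
    return sum(p * math.log(wealth + dw) for p, dw in outcomes)

wealth = 100.0
bet = [(0.5, +110.0), (0.5, -90.0)]   # expected wealth change = +10 > 0
decline = [(1.0, 0.0)]                # keep current wealth for certain

u_take = expected_log_utility(wealth, bet)
u_pass = expected_log_utility(wealth, decline)
print(u_take < u_pass)  # True: the log-utility agent declines the positive-EV bet
```

The agent here is a perfectly coherent expected utility maximizer; it simply maximizes the expectation of a concave function of wealth rather than of wealth itself.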