Contrary to this post, utility-based models of humans work fine. As explained here, you can construct utility-based models to represent any computable agent.
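As a minimal sketch of that construction (all names here are illustrative, not taken from the linked paper): take the agent's complete policy as given, and assign utility 1 to whatever the agent actually does in each situation.

```python
# Sketch of the generic construction, assuming we are handed the agent's
# complete policy (a function from observation histories to actions).
# All names here are illustrative.

def make_utility(policy):
    """Return a utility function under which `policy` is optimal."""
    def utility(history, action):
        # Utility 1 for the action the agent in fact takes, 0 otherwise;
        # the agent then trivially maximises expected utility.
        return 1.0 if action == policy(history) else 0.0
    return utility
```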
Of course you may have to account for humans caring about things other than money.
You can also construct deontological models for any utility-based agent (the right action is always that which maximises utility). Virtue ethics is a bit hazier, but you can certainly have ethics where maximising utility is virtuous.
And when people purport to explain human behaviour on small bets through risk aversion on a utility function, they do not say “here is a two billion line utility function that encodes behaviour” but “people have utility functions concave in money”.
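To make the concavity point concrete with illustrative numbers: under log utility (one standard concave choice), a fair 50/50 bet loses more utility on the downside than it gains on the upside, so the agent declines it.

```python
import math

# Illustrative numbers only: an agent with log utility (concave in money)
# declines a fair 50/50 bet of $100 at a wealth of $1,000.
wealth, stake = 1000.0, 100.0

u_decline = math.log(wealth)  # ~6.9078
u_accept = 0.5 * math.log(wealth + stake) + 0.5 * math.log(wealth - stake)  # ~6.9027

assert u_accept < u_decline  # the bet is fair in money, but not in utility
```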
Just because you can construct a utility-based model to represent an agent doesn't mean that the model so constructed is useful, or informative about what is actually going on.
To get the utility function for an arbitrary agent, as described in the paper your link points to, you have to already know what the agent would do in every possible situation. At that point, there is nothing left for the utility function to tell you.
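To make that concrete (continuing the illustrative sketch above): once the utility function is built from the full behavioural specification, asking it what the agent will do just reads the policy back out.

```python
# Continuing the illustrative sketch: the complete behaviour must be
# known up front to build the utility function...
policy = {"h1": "left", "h2": "right"}

def utility(history, action):
    return 1.0 if action == policy[history] else 0.0

# ...so "predicting" the agent's action from that utility function is
# just a lookup into the policy we already had.
def predict_action(history, actions=("left", "right")):
    return max(actions, key=lambda a: utility(history, a))

print(predict_action("h1"))  # 'left' -- exactly what we put in
```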
So, to recap, the claim in this post was that “expected utility maximization” is “completely wrong as a descriptive theory of how humans behave”.
That seems like an ungrounded claim to me. It is not the application of expected utility maximization to humans that is wrong, but the particular application of it made in this post.
I don’t agree with the claim that general-purpose utility-based models are not “useful” or “informative”. One point of them is that they allow comparison of the goals of arbitrary agents within a common framework. If you don’t yet see how that might be useful, you should probably think about the issue some more.
In this example, the utility-based model shows that humans are doing something other than maximizing their expected future wealth. Exactly what they are doing is not immediately obvious. They may, for example, be treating small transactions as a way of signalling to others how they will behave when the stakes are larger, or they may care more about how often they gain than about how much. What it does not mean is that they are not acting as expected utility maximizers.