I think the post is saying “if your preferences are somewhat coupled to the preferences of an expectation maximizer, then in some limit, your preferences match that expectation maximizer.”
But so what? Why should your preferences have any relation to a real-valued function of the world? If you satisfy all the axioms, your preferences are exactly expectation-maximizing for a function that vN and M tell you how to build. But if the whole point is to drop one of the axioms, why should you still expect such a function to be relevant?
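Concretely, the vNM theorem's guarantee is that the axioms yield a real-valued utility function whose expectation represents the preference order (the notation below is standard, not from the post):

```latex
% vNM representation theorem: if a preference order \succeq over lotteries
% satisfies completeness, transitivity, continuity, and independence, then
% there exists u : \text{outcomes} \to \mathbb{R} such that for lotteries A, B:
A \succeq B \iff \mathbb{E}_{A}[u] \geq \mathbb{E}_{B}[u]
```

Drop independence and this biconditional is exactly what you lose, so the existence of any such $u$ is back on the table.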
(This has been said elsewhere in the thread, but only tentatively, and not at the top level.)
The results are about the “expected” part of expected utility, not the “utility” part. Independence is overstrong; replacing it with the weaker condition of being somewhat coupled to an expectation maximizer demands much less. And yet in the limit it mimics the expectation requirement, which is a very useful result.
(dropping independence completely leaves you flailing all over the place)