Suppose it turned out that humans violate the axioms of VNM rationality (and therefore don’t act like they have utility functions) because there are three valuation systems in the brain that make conflicting valuations, and …
A question, probably silly:
Suppose you calculate what a person would do given every possible configuration of sensory inputs, and then construct a utility function that returns one if that thing is done and zero otherwise. Can’t we then say that any deterministic action-taking thing acts according to some utility function?
Or, even more trivially, just let the utility be constant. Then any action maximizes utility.
Edit: If you’re using utility functions to predict actions, then the constant utility function is like a maximum entropy prior, and the “every possible configuration” thing is like a hypothesis that simply lists all observations without positing some underlying pattern, so it would eventually get killed off by being more complicated than hypotheses that actually “compress” the evidence.
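For concreteness, here is a minimal sketch of the construction (the names are mine, not standard): take any deterministic mapping from sensory inputs to actions and build the indicator "utility function" that returns 1 exactly when the agent does what it was going to do anyway. The constant-utility case is even more trivial.

```python
def indicator_utility(policy):
    """Post-hoc 'utility': 1 for the action the policy actually takes
    given those inputs, 0 for every other action."""
    def utility(inputs, action):
        return 1 if policy[inputs] == action else 0
    return utility

# Any deterministic policy "maximizes" the utility built from itself:
policy = {"red light": "stop", "green light": "go"}
u = indicator_utility(policy)
assert all(u(inputs, action) == 1 for inputs, action in policy.items())

# The even more trivial case: a constant utility, which every action maximizes.
constant_utility = lambda inputs, action: 0
```

Note that the indicator function is built *from* the policy, so it can only be written down after you already know what the agent does, which is exactly the post-hoc objection raised below.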
A question, probably silly: Suppose you calculate what a person would do given every possible configuration of sensory inputs, and then construct a utility function that returns one if that thing is done and zero otherwise. Can’t we then say that any deterministic action-taking thing acts according to some utility function?
No, although this idea pops up often enough that I have given it a name: the Texas Sharpshooter Utility Function.
There are two things glaringly wrong with it. Firstly, it is not a utility function in the sense of VNM (proof left as an exercise). Secondly, it does not describe how anything works—it is purely post hoc (hence the name).