“Utility Maximizer” exists in the map, not the territory. It’s something we can apply to model the behaviour of things in the territory. As in all cases, models make a trade-off between simplicity and accuracy.
Some entities are particularly well modelled (by me) as carrying out a strategy of “Maximize [X]” where [X] is a short description of some outcome.
(The classic example of “Stockfish” being well modelled by “Achieve wins in chess” comes to mind. Someone might well model a company as executing a strategy of “Maximize your profits” or a politician as executing a strategy of “Maximize your popularity”.)
This isn’t perfect, obviously. We might need to add some extra information. For example, we can describe a chess player as executing “Win chess” but with an extra variable of “ELO = 1950” which describes the power of that utility maximizer. Likewise, you might model a doctor as executing a strategy of “Cure patients” but subject to a limited set of knowledge. This isn’t really what people mean by “irrational” though, since these are mostly just limitations.
What really makes an entity “irrational” is when your model of it contains pretty much any other kind of behaviour. For example, take those Go bots whose behaviour is well modelled as “Win at Go, ELO = Superhuman, EXCEPT behave as if you think cyclic patterns are unbeatable”. In that case, the Go engine is behaving irrationally under our model of the world.
(Another classic example: someone who mostly has consistent preferences, which can be simply described by a utility function, but also prefers apples to bananas, bananas to oranges, and oranges to apples. This puts an epicycle in our model if we have to model their fruit-swapping behaviour.)
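For concreteness: a strict preference cycle is exactly the thing no utility function can represent, since it would require u(apples) > u(bananas) > u(oranges) > u(apples). Here is a minimal sketch of that check; the helper is hypothetical, not from any library:

```python
# Minimal sketch (hypothetical helper): a strict preference relation fits a
# utility function only if it contains no cycles. The fruit-swapper fails
# this check, which is the epicycle the model has to carry.

def has_preference_cycle(prefers):
    """prefers: list of (a, b) pairs meaning 'a is strictly preferred to b'."""
    graph = {}
    for a, b in prefers:
        graph.setdefault(a, set()).add(b)

    def hits_cycle(node, path):
        # Depth-first search: a cycle exists if we revisit something on the path.
        for nxt in graph.get(node, ()):
            if nxt in path or hits_cycle(nxt, path | {nxt}):
                return True
        return False

    return any(hits_cycle(start, {start}) for start in graph)

# Apples > bananas > oranges > apples: no utility function can produce this.
print(has_preference_cycle([("apple", "banana"),
                            ("banana", "orange"),
                            ("orange", "apple")]))   # True
```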
An entity is irrational under some model to the degree that the model needs extra epicycles to capture its behaviour. A person might appear like a utility maximizer (and thus, very rational) to a much stupider person (who would not be able to model their behaviour in any other way), but very unlike a utility maximizer to a superintelligent AI. Since most humans don’t vary in intelligence by that much, most of the time we’re working under similar models, so we can just talk about entities being irrational.
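A toy sketch of that claim, with the caveat that the class and scoring rule below are my own illustrative assumptions rather than anything standard: count the exception clauses a model needs on top of the pure “Maximize [X]” description.

```python
# Toy sketch: "irrationality under a model" as the number of exception
# clauses (epicycles) bolted onto the pure-maximizer description.
# These names and this scoring rule are illustrative, not a standard library.

from dataclasses import dataclass, field

@dataclass
class AgentModel:
    goal: str                                       # e.g. "Achieve wins in chess"
    power: str = "unknown"                          # e.g. "ELO = 1950"
    epicycles: list = field(default_factory=list)   # exception clauses

def irrationality(model: AgentModel) -> int:
    """More epicycles -> more irrational under this model; a pure maximizer scores 0."""
    return len(model.epicycles)

stockfish = AgentModel("Achieve wins in chess", power="superhuman")
go_bot = AgentModel("Win at Go", power="superhuman",
                    epicycles=["behaves as if cyclic patterns are unbeatable"])

print(irrationality(stockfish))  # 0: well modelled as a pure maximizer
print(irrationality(go_bot))     # 1: one epicycle in our model of it
```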
Caveat:
We might want to talk about which agents are more or less rational in general, which then means we’re making a claim that our models reflect some aspect of reality. A more (or less) rational agent is then one which is overall considered more (or less) rational under a wide variety of high-accuracy low-complexity models.