This assumes that both utility functions are implemented with perfect reason, unlimited intellect, and no interference from emotion.
Can I instead offer [Logan] bets where he chooses how much of his money to put in, and he still puts in all but a penny?
To an outside observer it appears that “Logan has a utility function that’s logarithmic in money. He’ll bet 20% of his bankroll every time, and his wealth will grow exponentially.” I’d posit that Logan’s internal narrative about gambling, which an observer would summarize as the stated utility function, is much more like “I don’t like the risk of going broke, so I’m going to bet in a way that seems unlikely to blow all my money on any one bet that might lose it”.
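As a quick sketch of the outside observer’s description: for a repeated even-money bet won with probability p, a log-utility (Kelly) bettor stakes the fixed fraction 2p − 1 of his bankroll each round, so p = 0.6 gives exactly the 20% figure above. The 60% win probability is my assumption for illustration; the discussion doesn’t specify the odds.

```python
import random

def simulate(fraction, rounds=1000, p_win=0.6, start=100.0, seed=0):
    """Repeatedly bet `fraction` of the bankroll on an even-money gamble
    won with probability `p_win`; return the final bankroll.

    A log-utility (Kelly) bettor uses fraction = 2*p_win - 1 = 0.2
    for the assumed p_win = 0.6."""
    rng = random.Random(seed)
    wealth = start
    for _ in range(rounds):
        stake = fraction * wealth
        wealth += stake if rng.random() < p_win else -stake
    return wealth

# Note: a fixed-fraction bettor multiplies his wealth by (1 + f) or (1 - f)
# each round, so he can never literally hit zero -- wealth stays positive,
# though it can shrink arbitrarily close to broke.
```

One design point worth noting: because the stake is always a fraction of the *current* bankroll, the fixed-fraction strategy never risks literal ruin, which is one formal way of cashing out the “I don’t want to go broke” narrative.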
Considering the emotional context of Logan’s behavior with money, I think it’s actually quite unlikely that you could persuade him to make any bet that, if he loses, will leave him forced to bet all his remaining money in the subsequent round just to stay in the game. I’m not sure what math words apply to this lookahead that a human Logan would perform to say “that’s a tempting bet, but if I lose it I’m screwed in the next round, and I don’t want to be screwed”, but that’s a type of thought and behavior that I think your utility function modeling neglects to account for.
If Logan were perfectly intelligent and believed himself to be so, he might behave in a perfectly rational manner even when all but one cent of his money was at stake. But I don’t think you could introduce me to any human in the world who both is perfectly intelligent and believes that they are. There are people who erroneously believe they’re perfectly intelligent, but they rarely believe it in the way that a perfectly rational person would be expected to. There are highly rational people who know their intellectual limitations, but one trait of that rationality is planning ahead: considering that their appraisal of the odds of any gamble could be inaccurate, and keeping a financial safety net in case they make some mistake.
In short, I think I’ve entirely missed the point of why it’s useful to speculate about the behavior of hypothetical people whose behavior differs so significantly from what we see in actual people.
Note that I called Linda and Logan agents, not people. I’m not entirely confident that no person would act like Linda and Logan, but surely no human person would.
I kinda feel like you’re asking “why is this branch of math useful at all?” and that’s fair enough, but I’m happy for this particular post not to try to answer it. (And I’m not going to try to answer it in the comments either, but maybe someone else will.)
Ah, thanks for clarifying. You’re writing for an audience who has their own reasons for wanting to speculate about agents instead of people, and I lack such reasons. That’s why I missed the point :)