I don’t think Dutch book arguments matter in practice. An easy way to avoid being Dutch booked is to refuse bets being offered to you by people you don’t trust.
Not that I fully endorse utility functions as a useful concept, but having a consistent one also keeps you from Dutch-booking yourself. Any decision can be interpreted as a bet in utility terms, and people often make decisions that cost them effort and energy yet leave them exactly where they started. So trying to work out one’s utility function may help prevent, e.g., anxious looping behavior.
Sure, if you’re right about your utility function. The failure mode I’m worried about is people believing they know what their utility function is and being wrong, maybe disastrously wrong. Consistency is not a virtue if, in reaching for consistency, you make yourself consistent in the wrong direction. Inconsistency can be a hedge against making extremely bad decisions.
The idea is that the universe offers you Dutch-book situations and you make and take bets on uncertain outcomes implicitly.
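To make the implicit-bets framing concrete, here is a minimal sketch (the function name and numbers are my own illustration, not anything from the thread): an agent whose subjective probabilities for an exhaustive pair of outcomes sum to more than 1 will accept a pair of bets it considers fair and lose money no matter which outcome occurs.

```python
# Hypothetical illustration: beliefs P(rain) = 0.6 and P(no rain) = 0.6
# are incoherent (they sum to 1.2). An agent who prices bets at its own
# subjective fair value will buy both tickets and lose in every outcome.

def net_result(p_rain: float, p_no_rain: float, stake: float = 100.0) -> float:
    """Net result for a believer who buys both tickets at subjective fair price.

    Exactly one ticket pays out `stake`, so the payoff is the same in
    either outcome; only the total price paid varies with the beliefs.
    """
    price_paid = p_rain * stake + p_no_rain * stake
    return stake - price_paid

loss = net_result(0.6, 0.6)  # a sure loss of 20, whichever way it rains
```

With coherent beliefs (the two probabilities summing to 1) the same pricing yields exactly zero, which is the sense in which coherence is what blocks the sure loss.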
That said, I concur with your basic point: a universal overarching utility function (not just small ones for a given situation, but a single large one covering you as a whole human) is something humans don’t have and, I think, can’t have. Recognising how mathematically convenient it would be if we did still doesn’t mean we can, and trying to turn oneself into an expected utility maximiser is unlikely to work.
(And, I suspect, will merely leave you vulnerable to everyday human-level exploits. Remember that the actual threat model we evolved under is other humans trying to beat us, and as long as we’re dealing with humans, our defences need to work against humans.)
“The idea is that the universe offers you Dutch-book situations”
But does it in fact do that? To the extent that you believe humans are bad Bayesians, you must also believe either that the environment in which humans evolved wasn’t constantly Dutch-booking them, or that, if it was, humans evolved some defense against it other than becoming perfect Bayesians.
I do suspect that our thousand shards of desire being contradictory and not resolving is selected for, in that we are thus money-pumped into propagating our genes.
You are of course correct about the concrete scenario of being Dutch-booked in a hypothetical gamble (and I am not a gambler, for similar reasons: we all know the house always wins!). However, if we’re going to discard the Dutch Book criterion, we need to replace it with some other desideratum for preventing the self-contradictory preferences that cause no-win scenarios.
Even if your own mind comes preprogrammed with decision-making algorithms that can enter no-win scenarios under some conditions, you should, as a conscious self-patching human being, recognize those failure modes and consciously employ other algorithms that won’t hurt themselves.
Let me put it this way: probabilities aside, if your decisions imply a cyclic preference ordering, one that fails to form even a partial order, isn’t there something severely wrong with that?
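A cyclic preference ordering can be money-pumped directly, no probabilities required. Here is a minimal sketch (the names, fees, and trade counts are illustrative assumptions, not from the thread): an agent that prefers A to B, B to C, and C to A will pay a small fee for each swap it regards as an improvement, and after a full cycle is holding its original item with strictly less money.

```python
# Hypothetical money-pump: an agent with cyclic preferences A > B > C > A
# pays a small fee for every swap it prefers, and after three trades is
# holding its original item, strictly poorer than when it started.

def money_pump(prefers, start_item, fee, offers):
    """Offer the agent one preferred swap per round, charging `fee` each time."""
    item, wealth = start_item, 0.0
    for _ in range(offers):
        for better, worse in prefers:
            if worse == item:   # the agent prefers `better` to what it holds
                item = better
                wealth -= fee   # it pays for each "improvement"
                break           # at most one trade per offer
    return item, wealth

cyclic = [("A", "B"), ("B", "C"), ("C", "A")]
final_item, final_wealth = money_pump(cyclic, "A", fee=1.0, offers=3)
# The agent trades A -> C -> B -> A, ending where it started, 3 units down.
```

Every individual trade looks like a gain to the agent, which is exactly why a cyclic ordering is exploitable even by an adversary who offers nothing but "improvements".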
Why?
Do you want to program an agent to put you in a no-win scenario? Do you want to put yourself in a no-win scenario?