How about saying that the Bayesian doesn’t have to offer any bets, but must accept a side of any two sided bets offered (even by someone who knows more).
So if you see the result of the coin and offer me either side of a 90:10 bet, I would update based on my beliefs about you and why you would offer that bet, and then I pick whichever side is profitable. If after updating my odds are exactly 90:10, then I am happy to pick either side.
The fact that an agent has chosen to offer the bet, as opposed to the universe, is important in this scenario. If they are trying to make money off you, the way to do that is to offer an unbalanced bet in the expectation that you will take the wrong side. For example, they might offer it because you think you have inside information, while they know that information is actually unreliable.
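To make the side-picking concrete, here's a small sketch (the function and the 0.95 posterior are my illustrative assumptions, not from the thread) of how an agent with posterior probability p evaluates each side of 90:10 odds:

```python
def side_values(p, a, b):
    """Odds a:b on heads: the heads-backer risks a to win b, the
    tails-backer risks b to win a. Returns the expected value of
    backing heads and of backing tails, given P(heads) = p."""
    ev_heads = p * b - (1 - p) * a
    ev_tails = (1 - p) * a - p * b  # exact mirror of ev_heads
    return ev_heads, ev_tails

# With posterior exactly 0.9 at 90:10 odds, both sides are (essentially)
# break-even, so the agent is happy to pick either:
print(side_values(0.9, 9, 1))

# If updating on *why* the bet was offered shifts the posterior to 0.95,
# backing heads becomes the profitable side:
print(side_values(0.95, 9, 1))
```

Because the two sides are exact mirrors, whichever side has positive expected value for you has negative expected value for the offerer, which is why their reason for offering matters.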
The problem is that you always have to play when they want to, whilst the other person only has to play when it suits them.
So I’m not sure if this works.
How about no, because I prefer my stability and I don’t want to track random bets on stuff I don’t care about?
Apply marginal utility to a 50/50 coin with the opportunity to bet a dollar, and you've got a 50% chance to, say, gain 9.9998 points and a 50% chance to lose 10 points. Why bother playing?
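A minimal sketch of that point (log utility and a wealth of 1000 are illustrative assumptions, not the commenter's numbers): with any concave utility function, even a perfectly fair dollar bet has negative expected utility.

```python
import math

def eu_change_from_bet(wealth, stake, p_win=0.5, u=math.log):
    """Expected change in utility from taking a 50/50 bet of `stake`
    at current wealth `wealth`, under utility function `u`."""
    return p_win * u(wealth + stake) + (1 - p_win) * u(wealth - stake) - u(wealth)

# Concave (log) utility: the fair bet is a slight utility loss.
print(eu_change_from_bet(1000, 1))
```

With linear utility the same bet comes out exactly neutral, which is why the "why bother playing?" conclusion depends on diminishing marginal utility.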
The only reasons to play are: if an option is discounted (say, 4x payout on heads and 1.5x payout on tails on a fair coin); if you don't care about the winnings but about playing the game itself; or if there's a threshold to reach (e.g. if I had 200 dollars I could pay off something else, which would keep the deferred interest from coming into play, saving me 1000 dollars; so I would take a 60% chance of losing 100 dollars, because those extra 100 dollars are worth not 100 but 1000 to me).
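The threshold case checks out with a line of arithmetic (using the commenter's own numbers: a 60% chance of losing 100 dollars, where winning is worth 1000 rather than 100):

```python
p_win, gain_dollars, loss_dollars, gain_value = 0.4, 100, 100, 1000

# In raw dollars the gamble is a loser...
ev_dollars = p_win * gain_dollars - (1 - p_win) * loss_dollars

# ...but in value, once reaching the threshold is counted, it is clearly worth taking.
ev_value = p_win * gain_value - (1 - p_win) * loss_dollars

print(ev_dollars)  # -20.0
print(ev_value)    # 340.0
```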
Plus there’s always epsilon—“the coin falls on its side” or other variations.
I’m not suggesting that people actually do this, just that this is a sensible assumption to make when laying the mathematical foundation of rationality.
Sure, but what Pimgd is pointing out is that it does not model rational behavior very well. Don’t build a mathematical framework on shaky foundations.
Yeah. It wouldn’t be as strong in practice (neither nature nor people are in the habit of offering two-sided bets) but as a theoretical constraint it seems to work as well.
Isn’t nature always in the habit of offering two-sided bets? Like, you can do one thing or the other.
Not with the payoffs given by de Finetti. For example, there's no way to play roulette so that it becomes an "anti-roulette", giving you a slight edge instead of the casino. Nature usually gives you a choice between doing X (accepting a one-sided bet as is) and not doing X. You don't always have the option of doing "anti-X" (taking the other side of the bet, with the risks and payoffs exactly reversed).
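A quick sketch of the roulette point, using a standard European straight-up bet (win probability 1/37, payout 35:1): the player's edge is slightly negative, and the mirrored "anti-roulette" bet, which nature never offers, would have the same edge with the sign flipped.

```python
p_win, payout = 1 / 37, 35

# Expected value per unit staked for the player...
ev_player = p_win * payout - (1 - p_win) * 1   # about -0.027

# ...and for the hypothetical reversed bet, which is not on offer.
ev_anti = -ev_player

print(ev_player, ev_anti)
```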