Isn’t this just the St Petersburg paradox?
The Wikipedia page has a discussion of solutions. The simplest one seems to be “this paradox relies on having infinite time and playing against a casino with infinite money”. If you assume the casino “only” has more money than anyone in the world, the expected value is not that impressive.
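To make the "finite casino" resolution concrete, here's a minimal sketch (the payout scheme and bankroll figure are assumptions, not from the thread): a standard St. Petersburg game where a coin is flipped until the first tails, paying 2^(k-1) dollars if tails arrives on flip k, except the casino can never pay out more than its bankroll.

```python
# Sketch: St. Petersburg expected value against a casino with a finite bankroll.
# Assumed payout scheme: flip a fair coin until tails; first tails on flip k
# pays 2**(k-1) dollars (probability 2**-k). Payouts are capped at `bankroll`.

def capped_st_petersburg_ev(bankroll):
    """Expected payout when the casino can pay at most `bankroll`."""
    ev = 0.0
    k = 1
    while 2 ** (k - 1) < bankroll:
        # each uncapped term is (2**-k) * 2**(k-1) = 0.5
        ev += (0.5 ** k) * 2 ** (k - 1)
        k += 1
    # every remaining outcome pays the capped bankroll
    ev += (0.5 ** (k - 1)) * bankroll
    return ev

print(capped_st_petersburg_ev(10 ** 12))  # roughly $21
```

With a trillion-dollar cap the expected value lands around $21, since each uncapped flip contributes only 50 cents and the cap kicks in after about 40 flips. That's the sense in which a finite casino makes the number "not that impressive."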
See also the Martingale betting system, which relies on the gambler having infinite money.
I don’t like any of the proposed solutions to that when I glanced through the SEP article on it. They’re all insightful but are sidestepping the hypothetical. Here’s my take:
Compute the expected utility not of a single choice BET/NO_BET but of a decision rule that tells you whether to bet. In this case, the OP proposed the rule “Always BET”, which has an expected utility of 0 and is bested by the rule “BET only once”, which is in turn bested by the rule “BET twice if possible”, and so on. The ‘paradox’, then, is that there is a sequence of rules whose expected earnings diverge to infinity. But then this is similar to the puzzle “Name a number; you get that much wealth.” Which number do you name?
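A small sketch of that sequence of rules, under an assumed game (the OP's exact game isn't quoted in this thread): each bet is all-in, tripling your stake with probability 1/2 and losing it otherwise.

```python
# Sketch: expected value of the pre-committed rule "BET exactly n times" in a
# hypothetical all-in game: each bet triples the stake with probability 1/2
# and loses everything otherwise.

def ev_bet_n_times(n, stake=1.0):
    """Expected wealth after committing to exactly n all-in bets."""
    # You survive all n bets with probability 2**-n, ending with stake * 3**n;
    # any single loss wipes you out, so only the all-wins branch contributes.
    return (0.5 ** n) * (3.0 ** n) * stake

for n in range(5):
    print(n, ev_bet_n_times(n))  # 1.0, 1.5, 2.25, 3.375, 5.0625
```

The expected value of “BET n times” grows like (3/2)^n, so the sequence of rules diverges, while the rule “Always BET” ends with nothing almost surely. That gap between the diverging expectations and the almost-sure outcome is the weirdness the thread is pointing at.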
(Actually I think the proposed rule is not “Always BET” but “Always make the choice which maximizes expected utility, conditional on choosing NO_BET at the next choice”. The fact that this strategy is flawed seems reasonable: you’re computing the expectation assuming you choose NO_BET next, but then you don’t actually choose NO_BET next. Don’t count your chickens before they hatch.)
Thanks! It looks very related, and is perhaps exactly the same. I hadn’t heard about it till now. The Stanford Encyclopedia of Philosophy has a good article on this with several possible resolutions.
No. In the St. Petersburg setup you don’t get to choose when to quit; you only get to choose whether to play the game at all. In this game you can remove the option for the player to just keep playing, forcing the player to pick in advance a point after which to quit, and there’s still something weird going on.