# Triple or nothing paradox

You are at a casino. You have $1. A table offers you a game: you have to bet all your money; a fair coin will be tossed; if it lands heads, you triple your money; if it lands tails, you lose everything. In the first round, it is rational to take the bet, since the expected value is $1.50, which is greater than the $1 you started with.

If you win the first round, you’ll have $3. In the next round, it is rational to take the bet again, since the expected value is $4.50, which is larger than $3. If you win the second round, you’ll have $9. In the next round, it is rational to take the bet again, since the expected value is $13.50, which is larger than $9.

You get the idea. At every round, if you won the previous round, it is rational to take the next bet.

But if you follow this strategy, it is guaranteed that you will eventually lose everything. You will go home with nothing. And that seems irrational.
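A quick Monte Carlo sketch (an illustration, not part of the original post) makes the guarantee concrete: under the always-bet strategy, essentially every simulated run ends in ruin, even though every individual bet has positive expected value.

```python
import random

def always_bet(trials=10_000, max_rounds=1_000):
    """Simulate the 'always take the bet' strategy.

    Each trial starts with $1 and keeps betting until tails wipes
    it out (with a safety cap on rounds); returns the fraction of
    trials that end in ruin.
    """
    ruined = 0
    for _ in range(trials):
        for _ in range(max_rounds):
            if random.random() < 0.5:  # tails: lose everything
                ruined += 1
                break
    return ruined / trials

print(always_bet())  # ~1.0: the chance of surviving even 50 rounds is 2^-50
```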

Intuitively, it feels that the rational thing to do is to quit while you are ahead, but how do you get that prediction out of the maximization of expected utility? Or does the above analysis only feel irrational because humans are loss-averse? Or is loss-aversion somehow optimal here?

Anyway, please dissolve my confusion.

• Isn’t this just the St. Petersburg paradox?

• The Wikipedia page has a discussion of solutions. The simplest one seems to be “this paradox relies on having infinite time and playing against a casino with infinite money”. If you assume the casino “only” has more money than anyone in the world, the expected value is not that impressive.

See also the Martingale betting system, which relies on the gambler having infinite money.

• I don’t like any of the proposed solutions to this that I saw when I glanced through the SEP article on it. They’re all insightful but sidestep the hypothetical. Here’s my take:

Compute the expected utility not of a choice BET/NO_BET but of a decision rule that tells you whether to bet. In this case, the OP proposed the rule “Always BET”, which has expected utility of 0 and is bested by the rule “BET only once”, which is in turn bested by the rule “BET twice if possible”, and so on. The ‘paradox’ then is that there is a sequence of rules whose expected earnings diverge to infinity. But then this is similar to the puzzle “Name a number; you get that much wealth.” Which number do you name?

(Actually, I think the proposed rule is not “Always BET” but “Always make the choice which maximizes expected utility conditional on choosing NO_BET at the next choice”. The fact that this strategy is flawed seems reasonable: you’re computing the expectation assuming you choose NO_BET next, but you don’t actually choose NO_BET next. Don’t count your chickens before they hatch.)
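The diverging sequence of rules can be tabulated directly: the rule “BET exactly N times, quitting early on a loss” ends with $3^N with probability (1/2)^N, for an expected value of (3/2)^N. A small illustrative script (not from the thread):

```python
from fractions import Fraction

def rule_ev(n_bets):
    """Expected value of the rule 'BET exactly n_bets times, then quit',
    starting from $1: win 3^n with probability (1/2)^n, else end with $0."""
    win_prob = Fraction(1, 2) ** n_bets
    payout = 3 ** n_bets
    return win_prob * payout  # = (3/2)^n_bets

for n in range(5):
    print(n, rule_ev(n))  # 1, 3/2, 9/4, 27/8, 81/16: diverging to infinity
```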

• Thanks! It looks very related, and is perhaps exactly the same. I hadn’t heard about it till now. The Stanford Encyclopedia of Philosophy has a good article on this with different possible resolutions.

• No. In the St. Petersburg setup you don’t get to choose when to quit, you only get to choose whether to play the game or not. In this game you can remove the option for the player to just keep playing, and force the player to pick a point after which to quit, and there’s still something weird going on there.

• It’s very annoying trying to have this conversation without downvotes. Anyway, here are some sentences.

1. This is not quite the St. Petersburg paradox; in the St. Petersburg setup, you don’t get to choose when to quit, and the confusion is about how to evaluate an opportunity which apparently has infinite expected value. In this setup the option “always continue playing” has infinite expected value, but even if you toss it out there are still countably many options left, namely “quit playing after N victories,” each of which has higher expected value than the last, and it’s still unclear how to pick between them.

2. Utility not being linear in money is a red herring here; you can just replace money with utility in the problem directly, as long as your utility function is unbounded. One resolution is to argue that this sort of phenomenon suggests that utility functions ought to be bounded. (One way of concretizing what it means to have an unbounded utility function: you have an unbounded utility function if and only if there is a sequence of outcomes, each of which is at least “twice as good” as the previous, in the sense that you would prefer a 50% chance of the better outcome and a 50% chance of some fixed outcome to a 100% chance of the worse outcome.)

3. Thinking about your possible strategies before you start playing this game, there are infinitely many: for every nonnegative integer N, you can choose to stop playing after N rounds, or you can choose to never stop playing. Each strategy is more valuable than the last, and the last strategy has infinite expected value. If you state the question in terms of utilities, that means there’s some sense in which the naive expected utility maximizer is doing the right thing, if it has an unbounded utility function.

4. On the other hand, the foundational principled argument for taking expected utility maximization seriously as an (arguably toy) model of good decision-making is the vNM theorem, and in the setup of the vNM theorem lotteries (probability distributions over outcomes) always have finite expected utility, because 1) the utility function always takes finite values (an infinite value violates the continuity axiom), and 2) lotteries are only ever over finitely many possible states of the world. In this setup, without a finite bound on the total number of rounds, the possible states of the world are given by possible sequences of coin flips, of which there are uncountably many, and the lottery over them you need to consider to decide how good it would be to never stop playing involves all of them. So, you can either reject the setup because the vNM theorem doesn’t apply to it, or reject the vNM theorem because you want to understand decision making over infinitely many possible outcomes; in the latter case there’s no reason a priori to talk about expected utility maximization. (This point also applies to the St. Petersburg paradox.)

5. If you want to understand decision making over infinitely many possible outcomes, you run into a much more basic problem which has nothing to do with expected values: suppose I offer you a sequence of possible outcomes, each of which is strictly more valuable than the previous one (and this can happen even with a bounded utility function, as long as it takes infinitely many values; although, again, there’s no reason a priori to talk about expected utility maximization in this setting). Which one do you pick?

• Thank you for this clear and useful answer!

• The rational choice depends on your utility function. Your utility function is unlikely to be linear in money. For example, if your utility function is log(X), then you will accept the first bet, be indifferent to the second bet, and reject the third bet. Any risk-averse utility function (i.e. any monotonically increasing function with negative second derivative) reaches a point where the agent stops playing the game.

A vNM-rational agent with a linear utility function over money will indeed always take this bet. From this, we can infer that linear utility functions do not represent the utility of humans.

(EDIT: The comments by Satt and AlexMennen are both correct, and I thank them for the corrections. I note that they do not affect the main point, which is that rational agents with standard utility functions over money will eventually stop playing this game.)

• Any risk-averse utility function (i.e. any monotonically increasing function with negative second derivative) reaches a point where the agent stops playing the game.

Not true. It is true, however, that any agent with a bounded utility function eventually stops playing the game.
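This can be checked with a concrete bounded utility function. Assuming, purely for illustration (none of this is from the thread), u(x) = 1 − exp(−x/K), which is bounded above by 1, the agent bets only while 0.5·u(3W) exceeds u(W), and that fails once wealth W crosses roughly 0.48·K:

```python
import math

K = 100.0  # an assumed 'satiation scale' for this illustration

def u(x):
    """A bounded, risk-averse utility function: u(x) = 1 - exp(-x/K)."""
    return 1.0 - math.exp(-x / K)

def should_bet(wealth):
    """Bet iff the flip's expected utility beats holding:
    0.5*u(3W) + 0.5*u(0) > u(W), with u(0) = 0."""
    return 0.5 * u(3 * wealth) > u(wealth)

wealth = 1.0
while should_bet(wealth):
    wealth *= 3  # suppose every flip comes up heads
print(wealth)  # the agent still quits; with K = 100, at wealth 81
```

Solving 0.5·(1 − t³) = 1 − t for t = exp(−W/K) gives t = (√5 − 1)/2, i.e. the stopping threshold W ≈ 0.481·K, no matter how large the next payoff is.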

• Thanks for catching that, I stand corrected.

• For example, if your utility function is log(X), then you will accept the first bet

Not even that. You start with $1 (utility = 0) and can choose between

1. walking away with $1 (utility = 0), and

2. accepting a lottery with a 50% chance of leaving you with $0 (utility = −∞) and a 50% chance of having $3 (utility = log(3)).

The first bet’s expected utility is then −∞, and you walk away with the $1.
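A two-line numerical check of this correction (illustrative code, mapping $0 to utility −∞ as the comment does):

```python
import math

def log_u(x):
    # log utility; losing everything is utility -infinity
    return math.log(x) if x > 0 else float("-inf")

ev_first_bet = 0.5 * log_u(3) + 0.5 * log_u(0)
print(ev_first_bet)             # -inf
print(ev_first_bet < log_u(1))  # True: keeping the $1 (utility 0) wins
```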

• You are fighting the hypothetical.

In the St. Petersburg paradox the casino is offering a fair bet, the kind that casinos offer. It is generally an error for humans to take these.

In this scenario, the casino is magically tilting the bet in your favor. Yes, you should accept that bet and keep playing until the amount is an appreciable fraction of your net worth. But given that we are assuming the strange behavior of the casino, we could let the casino tilt the bet even farther each time, so that the bet has positive expected utility. Then the problem really is infinity, not utility. (Even agents with unbounded utility functions are unlikely to have them be unbounded as a function of money, but we could imagine a magical wish-granting genie.)

• He’s not fighting the hypothetical; he merely responded to the hypothetical with a weaker claim than he should have. That is, he correctly claimed that realistic agents have utility functions that grow too slowly with respect to money to keep betting indefinitely, but this is merely a special case of the fact that realistic agents have bounded utility, and thus will eventually stop betting no matter how great the payoff of winning the next bet is.

• This is a stupid comment. I would downvote it and move on, but I can’t, so I’m making this comment.

• I agree with this, if “this” refers to your own comment and not the one it replies to.

• Assuming that the total time it takes to make all your bets is not infinite, this results in

• Calculate the chance of breaking the casino, for any finite maximum payout. It’s always non-zero; there are no infinities.

• This is not anyone’s true rejection, since no one would plan to play until they lost everything even if the casino had infinite wealth.

• It’s not a true offer, so it’s hard to predict whether a rejection is true. I think I’d be willing to play for any amount they’d let me.

But that doesn’t matter. No matter where you stop, the “paradox” doesn’t happen for finite amounts.

• Money doesn’t work in a way that makes this interesting.

If I have one dollar, I basically have nothing. What can I buy for one dollar? Well, a shot at this guy’s casino. Why not?

Now I’m broke, ah well, but I was mostly here before. OR Now I’ve got 3 dollars. Still can’t buy anything, except for this crazy casino.

Now I’m broke, ah well, but I was mostly here before, OR Now I’ve got 9 dollars. That’s a meal if I’m near a fast food joint, or another spin at the casino which seems to love me.

Now I’m broke, ah well, but I was mostly here before, OR Now I’ve got 27 dollars. That’s a meal at a casino’s prices. Do I want a burger or another spin?

And so on. At each level, compare the thrill of gambling with whatever you could buy for that money. You will eventually be able to get something that you value more than the chance to go back up to the table.

Always do the thing with this money that you will enjoy the most. Initially that is gonna be the casino, because one dollar. Before it gets to something interesting you will lose, because odds.

• I have my own explanation for this but it will take time to compress. We are implying two different definitions or contexts of the word “rational”, though, imo, which is sort of the crux of my argument. I think we are also conflating definitions of time, and also conflating different definitions of reality.

• Anyway, please dissolve my confusion.

I think the most fun and empirical way to dissolve this confusion would be to hold a tourney. Remember the Prisoner’s Dilemma competitions that were famously won, not by complex algorithms, but by simple variations on Tit-for-Tat? If somebody can host, the rules would be something like this:

1. Players can submit scripts which take only one input (their current money) and produce only one output (whether to accept the bet again). The host has infinite money since it’s just virtual.

2. Each script gets run N times, where N isn’t told to the players in advance. The script with the highest winnings is declared Interesting.
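A minimal sketch of such a host, assuming entries are plain Python functions from current money to a bet/no-bet decision (all names and example entrants here are hypothetical):

```python
import random

def run_tourney(strategies, n_runs=1000, seed=0):
    """strategies: name -> function(current_money) -> bool (True = bet again).
    Each strategy plays n_runs independent games starting from $1;
    total final winnings are compared."""
    rng = random.Random(seed)
    totals = {name: 0 for name in strategies}
    for _ in range(n_runs):
        for name, decide in strategies.items():
            money = 1
            while money > 0 and decide(money):
                money = money * 3 if rng.random() < 0.5 else 0
            totals[name] += money
    return totals

# Hypothetical example entrants:
entrants = {
    "never_bet": lambda m: False,
    "quit_at_27": lambda m: m < 27,
    "always_bet": lambda m: True,
}
print(run_tourney(entrants))
```

Predictably, "always_bet" ends every run at $0, while the quit-at-a-threshold rules trade a lower win probability for a bigger payout.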

• But if you follow this strategy, it is guaranteed that you will eventually lose everything. You will go home with nothing. And that seems irrational.

It is not irrational, just a case of revealed preference. It intuitively doesn’t sound good because your utility function for money is not linear: otherwise you would be indifferent to losing the money. Indeed, humans are more risk-averse than a linear utility function allows.

• This is a classic theme that regularly comes back. I propose a different but related paradox: there’s a box with one utilon inside. Every hour that the box stays closed, you get one more utilon inside.
When do you open it?

• Consider the sequence of numbers of the form (1 + 1/n)^n, where n ranges over the natural numbers. Each number in the sequence is rational; the limit of the sequence is not.

Consider your opening question. Assume that your utility is linear in money, or that the magical casino offers to triple your utility directly. Each step in the sequence of rounds is rational; the limit itself is not.

Infinities are weird.
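The analogy in numbers: every term of (1 + 1/n)^n is rational, yet the terms converge to the irrational number e.

```python
import math

def term(n):
    # the n-th term of the sequence (1 + 1/n)^n
    return (1 + 1 / n) ** n

for n in (1, 10, 1000, 1_000_000):
    print(n, term(n))  # 2.0, 2.5937..., 2.7169..., 2.7182...: approaching e
```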

• Suppose that at the beginning of the game, you decide to play no more than N turns. If you lose all your money by then, oh well; if you don’t, you call it a day and go home.

• After 1 turn, there’s a 1/2 chance that you have 3 dollars; expected value = 3/2

• 2 turns, 1/4 chance that you have 9 dollars; expected value = (3/2)^2

• 3 turns, 1/8 chance of 27 dollars; E = (3/2)^3

• 4 turns, 1/16 chance of 81 dollars; E = (3/2)^4

• ...

• N turns, 1/2^N chance of 3^N dollars; E = (3/2)^N

So the longer you decide to play, the higher your expected value is. But is a 1/2^100 chance of winning 3^100 dollars really better than a 1/2 chance of winning 3 dollars? Just because the expected value is higher doesn’t mean that you should keep playing. It doesn’t matter how high the expected value is if a 1/2^100 probability event is unlikely to happen in the entire lifetime of the Universe.
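The numbers above can be tabulated, which makes the tension vivid: the expected value explodes while the probability of collecting anything at all vanishes (a small illustrative script):

```python
def stop_after(n):
    """Plan: stop after n straight wins. Returns (probability of
    getting there, payout if you do, expected value)."""
    prob = 0.5 ** n
    payout = 3 ** n
    return prob, payout, prob * payout

for n in (1, 2, 3, 4, 100):
    prob, payout, ev = stop_after(n)
    print(f"N={n}: prob={prob:.3g}, payout={payout:.3g}, E={ev:.3g}")
# At N=100 the expected value is (3/2)^100, about 4e17 dollars,
# but the chance of collecting is 2^-100, about 8e-31.
```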