Kelly Criterion is for Cowards

[More leisurely version of this post in video form here]

Imagine a wealthy, eccentric person offers to play a game with you. You flip 2 fair coins, and if either lands TAILS, you win. If both land HEADS, you lose. This person is willing to wager any amount of money you like on this game (at even money). So whatever you stake, there’s a ¼ chance you lose it and a ¾ chance you double it.
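The claimed edge is easy to check with a throwaway simulation (a sketch; the stake and trial count are arbitrary choices of mine):

```python
import random

random.seed(0)

def play(stake):
    # Flip two fair coins; you lose the stake only if both land heads
    both_heads = random.random() < 0.5 and random.random() < 0.5
    return -stake if both_heads else stake

# Average profit per $1 staked over many trials: roughly +$0.50,
# matching the theoretical edge of (3/4)*(+1) + (1/4)*(-1) = 0.5
n = 100_000
avg = sum(play(1.0) for _ in range(n)) / n
print(avg)
```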

There’s no doubt about the integrity of the game—no nasty tricks, it’s exactly what it looks like, and whoever loses really will have to honour the bet.

How much money would you put down? It’s very likely your initial answer to this question is far too low.

The von Neumann–Morgenstern theorem says we should act as if we are maximising the expected value of some utility function—and when it comes to this decision, the only meaningful variable our decision affects is how much money we have.

So to arrive at our correct bet size we just need to figure out the shape of our utility vs wealth curve.

[Figure: utility as a function of wealth—an upward-sloping, concave curve]

This curve is different for everyone, but in general we can say it should be upward sloping (more money is better than less) and get less steep as we move to the right (diminishing returns of each additional dollar).

When we think about an upward sloping curve with diminishing returns, the obvious choice that comes to mind is the log, i.e.

$$U(W) = \log(W)$$

Where $W$ is the total amount of wealth you have (including the value of all your property/investments)

We don’t have to choose the log here (there’s nothing actually special about it), but it’s a reasonable place to start our analysis from. Sizing our bets to maximise the log of our wealth is also known as the Kelly Criterion.

Intuitively, log utility says every doubling of money leads to the same incremental increase in wellbeing (so the happiness bump going from living on 50k to 100k a year is the same as going from 100k to 200k, which is the same as going from 200k to 400k, etc.)

This won’t be exactly your preferences, but hopefully this feels “close enough” for you to be interested in the implications.

If we start with a wealth of $W$ and then bet a fraction of that, $f$, on this coinflip game, then in worlds where we win we’ll end up with $W(1+f)$ and in worlds where we lose we’ll have $W(1-f)$.

So our expected utility is:

$$E[U] = \tfrac{3}{4}\log\big(W(1+f)\big) + \tfrac{1}{4}\log\big(W(1-f)\big)$$

Which is maximised when $f = 0.5$

So Kelly Criterion says you should bet half of everything you have on the outcome of this coinflip game.
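As a sanity check, the maximisation above can be verified numerically. A minimal sketch (wealth normalised to 1; the function name is my own):

```python
import numpy as np

# Expected log utility of betting a fraction f of wealth W on the game:
# 3/4 chance of ending with W*(1+f), 1/4 chance of W*(1-f)
def expected_log_utility(f, W=1.0):
    return 0.75 * np.log(W * (1 + f)) + 0.25 * np.log(W * (1 - f))

# Scan candidate fractions and pick the maximiser
fs = np.linspace(0.0, 0.99, 991)  # steps of 0.001
best = fs[np.argmax(expected_log_utility(fs))]
print(round(best, 3))  # -> 0.5
```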

This strikes most people as insanely aggressive—which is paradoxical, because the assumptions underpinning the analysis are actually wildly conservative.

As your wealth approaches zero, the log goes to negative infinity. So log utility is saying that going bankrupt is not just bad, but infinitely bad (akin to being tortured for eternity).

This is a bit overdramatic. A young American doctor who just finished med school with a small amount of student debt is not “poor” in any meaningful sense, and she’s certainly not experiencing infinitely negative wellbeing.

For anyone in the class of “people who might see this post”, when we compute our wealth before plugging it into the Kelly Criterion we omit 2 extremely important components:

  1. If we did go bankrupt, we’d still have a safety net to fall back on (friends/family/government services)

  2. Almost all of us are below retirement age and still have a lot of future earnings to look forward to[1]

If you re-do the analysis but treat your total wealth as 20% higher due to unrealised future earnings (which you can’t stake, but which still enter your utility), the optimal bet is half of total wealth, i.e. $0.5 \times 1.2W = 0.6W$—so the fraction of your current wealth you should bet according to log-utility jumps up to 60%.

Or if you think the peak of your career is still ahead of you—and model things so that your future earnings exceed your current net worth—the answer becomes bet every single cent you have on this game.
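The two cases above can be reproduced with the same numerical approach: model total wealth as current (bettable) wealth, normalised to 1, plus an `extra` slug of future earnings that can’t be staked but still counts toward utility. The `extra` values below are illustrative:

```python
import numpy as np

def expected_log_utility(f, extra):
    total = 1.0 + extra  # total wealth: current wealth (=1) plus future earnings
    # Only current wealth can be staked, so you gain or lose f of it
    with np.errstate(divide="ignore"):  # log(0) -> -inf at f=1, extra=0
        return 0.75 * np.log(total + f) + 0.25 * np.log(total - f)

fs = np.linspace(0.0, 1.0, 1001)  # fraction of *current* wealth staked
for extra in (0.0, 0.2, 1.0):
    best = fs[np.argmax(expected_log_utility(fs, extra))]
    print(f"extra={extra}: bet {best:.0%} of current wealth")
# extra=0.0 -> 50%, extra=0.2 -> 60%, extra=1.0 -> 100% (all-in)
```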

This is deeply unintuitive. And my stance is that in this idealized situation, where you really can be certain of a huge edge, it’s our intuitions that are wrong.

I honestly would go fully all-in on a game like this (if anyone thinks I’m joking and has a lot of money, please try me 😉)

But don’t go and start betting huge sums of money on my account just yet—in slightly more realistic settings there are forces which push us back closer to the realm of “normal” risk aversion. I plan to cover this in my next post.

  1. ^

    Pretending for now that AI isn’t about to transform the world beyond recognition...