Kelly Criterion is for Cowards
[More leisurely version of this post in video form here]
Imagine a wealthy, eccentric person offers to play a game with you. You flip 2 fair coins, and if either lands TAILS, you win. If both land HEADS, you lose. This person is willing to wager any amount of money you like on this game (at even-money). So whatever you stake, there’s a ¼ chance you lose it and a ¾ chance you double it.
There’s no doubt about the integrity of the game—no nasty tricks, it’s exactly what it looks like, and whoever loses really will have to honour the bet.
How much money would you put down? It’s very likely your initial answer to this question is far too low.
The von Neumann–Morgenstern theorem says we should act as if we are maximising the expected value of some utility function—and when it comes to this decision, the only meaningful variable our decision affects is how much money we have.
So to arrive at our correct bet size we just need to figure out the shape of our utility vs wealth curve.
This curve is different for everyone, but in general we can say it should be upward sloping (more money is better than less) and get less steep as we move to the right (diminishing returns from each additional dollar).
When we think about an upward sloping curve with diminishing returns, the obvious choice that comes to mind is the log. i.e.

U(W) = log(W)

Where W is our wealth.
We don’t have to choose the log here (there’s nothing actually special about it), but it’s a reasonable place to start our analysis from. Sizing our bets to maximise the log of our wealth is also known as the Kelly Criterion.
Intuitively, log utility says every doubling of money leads to the same incremental increase in wellbeing (so the happiness bump going from living on 50k to 100k a year is the same as going from 100k to 200k, which is the same as going from 200k to 400k, etc.)
This won’t be exactly your preferences, but hopefully this feels “close enough” for you to be interested in the implications.
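As a quick sanity check of the equal-doubling claim, here's a tiny snippet (my own illustration, not from the post) confirming each doubling adds exactly log(2) of utility:

```python
import math

# Under log utility, each doubling of wealth adds the same utility bump: log(2).
bumps = [math.log(2 * w) - math.log(w) for w in (50_000, 100_000, 200_000)]
print(all(math.isclose(b, math.log(2)) for b in bumps))  # True
```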
If we start with a wealth of W and stake a fraction f of it, we end up with W(1+f) with probability ¾ and W(1−f) with probability ¼.

So our expected utility is:

E[U] = ¾·log(W(1+f)) + ¼·log(W(1−f))

Which is maximised when f = ½.
So Kelly Criterion says you should bet half of everything you have on the outcome of this coinflip game.
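The maximisation can also be verified numerically; this is my own sketch of a brute-force grid search over stake fractions, not code from the post:

```python
import numpy as np

# Expected log utility of staking a fraction f on the game:
# win (prob 3/4) -> W*(1+f); lose (prob 1/4) -> W*(1-f).
def expected_log_utility(f, W=1.0):
    return 0.75 * np.log(W * (1 + f)) + 0.25 * np.log(W * (1 - f))

fs = np.linspace(0.0, 0.99, 991)               # candidate fractions, step 0.001
f_star = fs[np.argmax(expected_log_utility(fs))]
print(round(f_star, 3))  # 0.5
```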
This strikes most people as insanely aggressive—which is paradoxical, because the assumptions underpinning the analysis are actually wildly conservative.
As your wealth approaches zero, the log goes to negative infinity. So log utility is saying that going bankrupt is not just bad, but infinitely bad (akin to being tortured for eternity).
This is a bit overdramatic—A young American doctor who just finished med school with a small amount of student debt is not “poor” in any meaningful sense, and she’s certainly not experiencing infinitely negative wellbeing.
For anyone in the class of “people who might see this post”—when we compute our wealth W, we should account for the fact that:
If we did go bankrupt, we’d still have a safety net to fall back on (friends/family/government services)
Almost all of us are below retirement age and still have a lot of future earnings to look forward to[1]
If you re-do the analysis but treat W as being just 20% higher due to unrealised future earnings, the optimal betting fraction according to log-utility jumps up to 60%.
Or if you think the peak of your career is still ahead of you—and model things so that your future earnings exceed your current net worth—the answer becomes bet every single cent you have on this game.
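A minimal sketch of this adjustment (the function and numbers are mine): treat total wealth as current wealth plus future earnings, apply the f = ½ Kelly fraction to the total, but cap the stake at what you can actually put down today:

```python
# Sketch (my assumptions): total wealth = current + future earnings, but only
# current wealth is bettable. Log utility over total wealth implies staking
# half of the total, capped at what's on hand.
def optimal_stake_fraction(w_now, w_future):
    kelly_stake = 0.5 * (w_now + w_future)    # Kelly for this game: half of total
    return min(kelly_stake, w_now) / w_now    # as a fraction of current wealth

print(optimal_stake_fraction(1.0, 0.2))  # 0.6
print(optimal_stake_fraction(1.0, 1.0))  # 1.0
```

With future earnings worth 20% of current wealth, the stake is 60% of current wealth, matching the figure above; once future earnings equal or exceed current wealth, the cap binds and the answer is all-in.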
This is deeply unintuitive. And my stance is that in this idealized situation, where you really can be certain of a huge edge, it’s our intuitions that are wrong.
I honestly would go fully all-in on a game like this (if anyone thinks I’m joking and has a lot of money, please try me 😉)
But don’t go and start betting huge sums of money on my account just yet—in slightly more realistic settings there are forces which push us back closer to the realm of “normal” risk aversion. I plan to cover this in my next post.
- ^
Pretending for now that AI isn’t about to transform the world beyond recognition...
How would you respond to Zvi and SimonM, who argue that even the Kelly bet is reckless?
SimonM’s analysis is great—a hugely important point he covers well is that in the real world you don’t know exactly what your edge is.
And whenever you’re considering betting in a context like a highly liquid prediction market—you’re playing a negative sum game against competent adversaries. So for most people not only are they wrong about the size of their edge, but their edge is actually negative.
By default people have a bias towards risk aversion, which helps cancel out a bias towards overconfidence that they can beat the market.
But I think it’s still important to notice that if you’re someone with a safety net and/or future earnings to look forward to, you should in principle be willing to tolerate very high levels of risk as long as the expected value is positive (while still admitting the EV of day-trading options is negative).
The two world models:
“There’s lots of alpha to be found, but my utility as a function of money is very curved and I’m terrified of losses”
“My utility as a function of money is relatively flat given my substantial future earnings and safety net, but I don’t actually have an edge when it comes to financial markets”
Both advise against making reckless bets in financial markets. I claim for most of us number 2 is closer to the truth.
The practical implication—in situations where you get the opportunity to take +EV risks and you’re not subject to adversarial efficient-market dynamics—you basically want to load up on risk to a degree way higher than what feels comfortable.
When a game is asymmetric, non-zero-sum, and you don’t have competent adversaries trying hard to screw you—you really will find legitimate edges. This is where it’s appropriate to be extremely bold. And most of the time these prosaic “bets” have an inherently capped, relatively small bet size anyway.
Stuff like:
Spending money on cleaners/babysitters to free up time to work on speculative side projects
Hiring a tutor
Spending time+money to attend a networking event
Posting online under your real name
Spending money on products which may or may not work (e.g. a gadget that’s meant to help you sleep)
Asking for more money before accepting a job offer
Asking to pay less money before signing a contract to buy a house
Even mundane stuff like asking for an introduction or telling a joke that might not land
In my view the optimal policy for privileged young people is usually avoiding stuff like prediction markets (unless an absurd opportunity arises), while at the same time seeking to take an abnormally high level of +EV risk in positive-sum, non-EMH domains.
I suspect that this is not due to the problem as written, but due to similar real-world situations and the ease of being overconfident and underestimating p(loss). As Zvi put it, “Executing real trades is necessary to get worthwhile data and experience (italics mine—S.K.). Tiny quantities work. A small bankroll with this goal must be preserved and variance minimized. Kelly is far too aggressive.”
as a one-time bet sure, but there are obviously bankroll considerations for the iterated case. kelly criterion is about growing your bankroll the fastest you can without busting so badly that you can’t take advantage of the iterated bet any more.
There are different ways to exactly define “growing your bankroll the fastest without going bust”. Making that your goal and then choosing the one specific mathematical instantiation of the concept that leads to Kelly betting—is equivalent to declaring you have log utility.
If you have any utility function other than log(W) then you don’t maximize your expected utility by Kelly betting—even if there’s many repeated bets.
You do, however, maximise your long-term growth rate, regardless of any utility function. If you consistently overbet Kelly[1], you will consistently impoverish yourself, even while your “expected” utility is skyrocketing, but confined to an increasingly tiny sliver of probability space. In the limit, the probability of being in profit goes to zero. The “expectation” in this situation is the opposite of what you can expect to see.
ETA: by a sufficiently large amount; a sufficiently modest overbet is merely suboptimal.
When you say “growth rate” you’re picking out one specific metric from an infinite set of other reasonable choices! But you’re still doing a lot of normative work when you make the “rate of growth”, defined as

(1/n)·log(Wₙ/W₀)

the ideal you seek to maximise the expected value of! Of course if you focus on ratios, you tautologically end up maximising the logarithm!
If you run a bunch of simulations exploring the results achieved by different betting strategies and then compare the average results using the geometric mean then yes, Kelly betting wins.
If you plot some simulations on a chart and make your y-axis logarithmic then yes, the chart seems to show how Kelly beats everything almost always. But by doing the analysis this way you’ve already baked in the conclusion.
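To make that concrete, here's a small simulation (my own, with assumed parameters: 20 rounds, 20,000 trials) of the same coin game under Kelly (f = ½) versus a heavy overbet (f = 0.99). Which strategy “wins” flips depending on whether you aggregate with the arithmetic or the geometric mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Repeatedly play the coin game (win prob 3/4, even money), staking a fixed
# fraction f each round; return the final wealth multiple of each trial.
def simulate(f, rounds=20, trials=20_000):
    wins = rng.random((trials, rounds)) < 0.75
    growth = np.where(wins, 1.0 + f, 1.0 - f)   # per-round wealth multiplier
    return growth.prod(axis=1)

kelly = simulate(0.5)
overbet = simulate(0.99)

# Arithmetic mean favours the overbet; geometric mean favours Kelly.
print(overbet.mean() > kelly.mean())                              # True
print(np.exp(np.log(overbet).mean()) < np.exp(np.log(kelly).mean()))  # True
```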
In an analogous way to how odds ratios are isomorphic to probabilities—but in some cases cause less confusion when we try to reason about them—I think it’s way more productive to reason about your utility function than it is to talk about growth rates.
When we start to talk along the lines of “Yeah this maximises expected utility—but…”, that’s a sign that there’s a type error somewhere. There is no but—your utility function is definitionally the thing you want to maximise the expected value of.
Right, maximizing median wealth is also the same as ‘maximize the chance that I have the most bankroll to spare for any better betting opportunities that come along in the future’ afaik
What does “most” mean? If you start with W and go all-in on the coin flip game—you end up with 2W with probability ¾.
2W is the “most” you can possibly end up with to spare when the next betting opportunity comes along.
So by that framing, going all-in is what maximises the chance you have the most bankroll to spare.
(I’m not pretending that’s a good argument I just made—I’m just pointing out that these desiderata we’re trying to express in natural language have lots of room for interpretation when it comes to turning them into math—and the only sensible way to resolve this ambiguity is to start with your utility function and derive the risk-taking policy from that, not the other way around!)
unknown number of rounds before this bet expires and/or other bets come along.
Maybe it would help if you construct an explicit concrete model? You’re welcome to define what future opportunities will come along after this bet (or even a distribution of possible future opportunities)
Are you claiming that after you build this concrete model—Kelly betting will emerge as the objectively optimal strategy regardless of the agent’s preferences and regardless of whether we add a safety net/income stream into the picture?
Or are you making a softer (seemingly irrelevant) claim about what happens with geometric means/average growth rates when we don’t account for safety nets and income streams?
I think log utility mischaracterizes people’s utility-wrt-money function in some ways, but disagree with the reasons you give. The main departures afaict are that real utility follows multiple sigmoids around decision-relevant amounts of money (e.g. having runway vs living paycheck to paycheck, minimal retirement money, large lifestyle-change money) and the fact that real betting opportunities are heterogeneous rather than continuous—e.g. since high-conviction bets come along at unpredictable intervals, many people have a barbell strategy of mostly index funds plus a few higher-conviction concentrated bets.
There’s also that we can’t treat reachable utility via spending money as the same between people, but that’s outside scope.
this is a liquidity problem right? guessing you are young and have most of your career ahead of you? if you run out of money you probably move back in with your parents, it kind of sucks but it’s not that bad?
you would be less cavalier if you were betting from the net present value of your human capital (and truly understood the gravity of that bet).
Yep, you understood correctly!
If you’re 65 years old and already sitting on wealth that’s an order of magnitude more valuable than your future earnings/your safety net—then your utility function plausibly is close to log(W), and it’s reasonable to adopt a risk taking policy close to Kelly.
But for the majority of people who’ll see this post—the value of their safety net and their human capital is substantial compared to their current wealth—so taking this into account pushes them to bet more aggressively than Kelly prescribes (which imo is interesting given how aggressive Kelly already feels intuitively).
Personally, I’m 33 years old (that still counts as young?), married with 2 kids so far.
And yes, my family and I could probably move in with my sister or my wife’s parents if we were struggling. I agree if I was betting from the NPV of my human capital (rather than just what my wife and I currently own) I wouldn’t go all in on this bet.
What’s your situation and how much would you bet?
28, married, 1 kid, expecting to have to support my parents in their retirement at least a bit, well past full kelly due to illiquid ai equity but not jazzed about that level of concentration. net worth as fraction of npv of human capital depends on timelines and how you choose to price some of the assets, those concerns do at least anticorrelate though.
I don’t think so. ~Everyone can see that this is super obviously a good deal.
The “bankruptcy is infinitely bad” assumption is implicit in the log-wealth model and definitely gets some people confused when they first encounter it. It is definitely a wildly conservative assumption that is wrong—I don’t trust Kelly at like $0–$100. But it feels to me that this pitfall is not as big of an issue as you’re saying, because in most realistic situations 1. the utility of having low-digit money won’t affect the Kelly calculation much for mediocre deals (intuition; didn’t math it out), 2. you want to discount on Kelly (or any modified Kelly) anyways because of uncertainty in the probability/returns of the bets you’re making.
> I don’t think so. ~Everyone can see this is super obviously a good deal
There’s a difference between thinking it’s a good deal and being willing to bet 50% of your net worth on it. Do you think if we polled a bunch of people, almost everyone would say they’d bet at least as aggressively as Kelly prescribes? (If you do, want to make a meta-bet about this? :P )
I pointed out the negative infinity thing not just to make fun of the singularity at 0, but to gesture at the fact that in general we should consider our utility functions as being way less curved than a logarithm.
In a similar vein to how log utility treats the difference between being flat-out broke and only having $100 as infinite, whereas to you the difference is negligible—it’s also wildly exaggerating the difference between $100 and $1,000, and $1,000 vs $10,000.
It’s not just below $100 that you shouldn’t trust Kelly. If you have an annual salary of 100k you shouldn’t trust Kelly anywhere below like 500k!
You’re right that the issue is that in the real world it’s very easy to be wrong about the size of your edge, and most of the time any major edge comes along with a capped bet size. But distinguishing between “I’m very risk averse” and “I’m not very risk averse but it’s very hard to eke out big edges” is useful—and does have practical implications in some cases (going to write about this in a future post!)
In response to “Why” reacts:
Let’s think about another hypothetical. Say you were forced to choose between 2 bad options:
Option 1: You lose all your wealth save for $10,000
Option 2: We flip a coin; heads you lose all your wealth save for $100, tails you lose all your wealth save for $1 million (and if you’re currently worth less than a million you actually receive money to get your net worth up to this figure)
We’re not leaning on the singularity at 0 here.
Log utility says these options have the same value. But obviously someone with a safety net and healthy future earning potential should way prefer option 2.
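A one-line check (my own illustration) that log utility really does score the two options identically:

```python
import math

# Option 1: keep $10,000 for sure. Option 2: 50/50 between $100 and $1,000,000.
option1 = math.log(10_000)
option2 = 0.5 * math.log(100) + 0.5 * math.log(1_000_000)
print(math.isclose(option1, option2))  # True
```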
Ok I understand what you’re saying now. My reaction is that we should just add the expected current value of all the money you will make in the future (maybe discounted, and also conditional on you making the bet) to your current wealth, and then Kelly bet as if you have that much money. This seems like a valid critique of how people Kelly bet currently, but I still disagree that the correct response to this hypothetical is “we should consider our utility functions as being way less curved than a logarithm”. I think people do genuinely value wealth roughly logarithmically, so if you don’t make any money in the future then Kelly is correct.
I do understand “if you have an annual salary of 100k you shouldn’t trust Kelly anywhere below like 500k” now and I agree.
I don’t mind whether we frame it as utility curves being flatter than logarithmic, or as logarithmic curves shifted to the left—both are approximations of the real function regardless. (And mathematically I don’t think there’s even a difference… The slope of ln(x) is 1/x, so shifting it left does make it flatter.)
The high-level point is that both framings seem to imply we should bet far more aggressively than how the Kelly Criterion is typically applied.
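The shift-vs-flatten equivalence in the parenthetical can be checked numerically; the wealth and safety-net figures below are placeholders of mine:

```python
import math

# Slope of log(x + s) at wealth x is 1/(x + s) < 1/x: shifting left flattens.
x, s, h = 100_000.0, 500_000.0, 1e-3   # wealth, notional safety net, step size
slope_plain = (math.log(x + h) - math.log(x)) / h
slope_shifted = (math.log(x + s + h) - math.log(x + s)) / h
print(slope_shifted < slope_plain)  # True
```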
Not making a real-money bet because it seems difficult to operationalize / flesh out the details enough that I would think I have enough edge. People imo will correctly give a more conservative number if they think the question is realistic, and they will give closer to 50% if they think it is an idealized scenario where all they have is money. But I will say 30% of people would give >=50% if they understand the scenario as mathematical.
Also, I was saying something weaker. I disagreed with “This strikes most people as being insanely aggressive”. I am saying that people would, after being told the correct answer (i.e. in retrospect), think/tell you that the mathematically correct answer is not insanely aggressive. Even if 50% is higher than what they said they would personally bet, I think most people would not say, and would disagree, that it is insanely aggressive.
I think if you start asking people this question, even educated people, you’ll be surprised!
While the scenario is idealized in the sense that you can know the payoffs and odds with certainty—there’s no need to stipulate “all they have is money”—they can have a complex utility function involving a thousand inputs, as long as the only input that changes based on the bet they make is money.
Yeah, I’m thinking about stuff like “do I sell my house and my two dogs to make this bet?”
I calculate f = 5⁄8, not 1⁄2.
Never mind, for some reason I thought you were being offered a 2:1 payout as well as lopsided odds; that doesn’t appear to be the case.