It’s apparently not just for logarithmic utility functions. From the wikipedia page:
In most gambling scenarios, and some investing scenarios under some simplifying assumptions, the Kelly strategy will do better than any essentially different strategy in the long run.
Right, over an infinite series of bets the probability that Kelly goes ahead of a different fixed allocation goes to 1. Some caveats:
In the long run, we’re all dead: in decisions like retirement fund investments the game is short enough that Kelly takes too much risk of short-term losses and you should bet less than Kelly
Kelly doesn’t maximize expected winnings: each bet where you bet more than Kelly multiplies your EV relative to Kelly, in exchange for a chance of falling behind Kelly
A strategy that is “bet Kelly over the infinite series of bets, except for n all-in bets to get q times Kelly EV in exchange for probability p of losing it all” may not be “essentially different” but it’s noteworthy and calls for betting more than Kelly in some bets
In an odd situation where your utility is linear or super-linear in winnings, the utility-maximizing strategy is 100% all-in bets, which is an “essentially different” strategy even in the long run
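The EV-vs-probability tradeoff in the caveats above can be sketched with a quick Monte Carlo (a toy sketch; the 60% even-money bet and all parameters here are made up for illustration):

```python
import random

def simulate(fraction, p=0.6, b=1.0, n_bets=1000, trials=1000, seed=0):
    """Monte Carlo: repeatedly stake `fraction` of the bankroll on a bet
    that wins b times the stake with probability p."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        wealth = 1.0
        for _ in range(n_bets):
            stake = fraction * wealth
            wealth += stake * b if rng.random() < p else -stake
        finals.append(wealth)
    return sorted(finals)

p, b = 0.6, 1.0
kelly = (b * p - (1 - p)) / b        # 0.2 for a 60% even-money bet
kelly_runs = simulate(kelly)
over_runs = simulate(2 * kelly)      # bet double Kelly every time

median_kelly = kelly_runs[len(kelly_runs) // 2]
median_over = over_runs[len(over_runs) // 2]
# Each double-Kelly bet has higher EV than the Kelly bet, but the typical
# (median) double-Kelly bettor ends up far behind the typical Kelly bettor.
```

Over-betting raises the mean outcome while shifting the distribution so that almost all of the probability mass falls behind Kelly, which is exactly the tradeoff described above.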
“In the long run, we’re all dead: in decisions like retirement fund investments the game is short enough that Kelly takes too much risk of short-term losses and you should bet less than Kelly”
Which is one of the justifications for pension funds and annuities: by having a much longer timespan than any one retiree, they can make larger Kelly bets, see larger returns on investment, with benefits to either the retirees they are paying or the larger economy. Hanson says that this implies that eventually the economy will be dominated by Kelly players.
“the utility-maximizing strategy is 100% all-in bets”
Not quite. It’s going all-in when the expected value is greater than one, and not betting anything when it’s less. If you have a 51% chance of doubling your money, go all in. If you have a 49% chance, don’t bet anything. In fact, bet negative if that’s allowed.
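A minimal sketch of that linear-utility rule (`linear_utility_bet` is a hypothetical helper; `p` is the win probability and the bet pays `b` times the stake):

```python
def linear_utility_bet(p, b, bankroll, can_short=False):
    """EV-maximizing stake for an agent whose utility is linear in wealth.

    The bet risks the stake to win b times the stake with probability p,
    so the expected gain per unit staked is p*b - (1 - p).
    """
    edge = p * b - (1 - p)
    if edge > 0:
        return bankroll       # positive EV: go all in
    if edge < 0 and can_short:
        return -bankroll      # negative EV: take the other side if allowed
    return 0.0                # zero or negative EV: don't bet

# 51% chance of doubling (even money, b = 1): all in.
# 49% chance: stake nothing, or go short if that's permitted.
```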
In order for that to be true, you have to define “in the long run” in such a way that basically begs the question.
If you define “in the long run” to mean the expected value after n bets, the Kelly criterion is beaten by taking whatever bet has the highest expected value. For example, suppose you have a bet that has a 50% chance of losing everything and a 50% chance of quadrupling your investment. The Kelly criterion says not to take it, since losing everything has infinite disutility. If you don’t take it, your expected value is what you started with. If you take it n times, you have a 2^(-n) chance of having 4^n times as much as you started with, which gives an expected value of 2^n.
“For example, suppose you have a bet that has a 50% chance of losing everything and a 50% chance of quadrupling your investment. The Kelly criterion says not to take it, since losing everything has infinite disutility.”
A bet where you quadruple your investment has a b of 3, and p is .5. The Kelly criterion says you should bet (b*p-q)/b, where q = 1-p, which is (3*.5-.5)/3, i.e. one third of your bankroll every time. The expected value after n bets is (4/3)^n.
The assumption of the Kelly criterion is that you get to decide the scale of your investment, and that the investment scales with your bankroll.
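A quick check of that arithmetic (a sketch; `kelly_fraction` is a hypothetical helper name):

```python
def kelly_fraction(p, b):
    """Kelly stake as a fraction of bankroll for a bet that wins
    b times the stake with probability p (and q = 1 - p)."""
    q = 1 - p
    return (b * p - q) / b

# The 50/50 quadruple-your-investment bet: b = 3, p = .5.
f = kelly_fraction(0.5, 3)                    # one third of the bankroll

# Expected growth factor per bet when staking a fraction f of the bankroll:
growth = 0.5 * (1 + 3 * f) + 0.5 * (1 - f)    # = 4/3, so EV after n bets is (4/3)^n
```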
“If you take it n times, you have a 2^(-n) chance of having 4^n times as much as you started with, which gives an expected value of 2^n.”
Indeed, but the probability that the Kelly bettor does better than that bettor is 1-2^(-n)!
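That claim can be checked by exact enumeration over every win/loss sequence (a sketch in exact rational arithmetic; the all-in bettor stakes everything each round, the Kelly bettor stakes one third):

```python
from fractions import Fraction
from itertools import product

def compare(n):
    """Enumerate all 2^n outcome sequences of the 50/50 quadrupling bet.
    Returns (EV of all-in bettor, EV of Kelly bettor, P(Kelly is ahead))."""
    ev_all_in = Fraction(0)
    ev_kelly = Fraction(0)
    p_kelly_ahead = Fraction(0)
    prob = Fraction(1, 2) ** n       # each sequence is equally likely
    for seq in product([True, False], repeat=n):
        all_in, kelly = Fraction(1), Fraction(1)
        for win in seq:
            all_in *= 4 if win else 0              # all-in: x4 or bust
            kelly *= 2 if win else Fraction(2, 3)  # stake 1/3: x2 or x2/3
        ev_all_in += prob * all_in
        ev_kelly += prob * kelly
        if kelly > all_in:
            p_kelly_ahead += prob
    return ev_all_in, ev_kelly, p_kelly_ahead

ev_a, ev_k, p_ahead = compare(10)
# ev_a == 2^10, ev_k == (4/3)^10, and Kelly is ahead with probability 1 - 2^(-10):
# the all-in bettor only stays ahead on the single all-wins sequence.
```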
I think “in the long run” is used in the same sense as for the law of large numbers. The reason we get a different result is that the results of a bet constrain the possible choices for future bets, and it basically turns out that bets are roughly multiplicative in nature, which is why you want to maximize something like log(x) (because if x is multiplicative, log(x) is additive and the law of large numbers applies; that’s not a proof, but it’s the intuition).
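That intuition can be sketched numerically (toy parameters: a hypothetical 60% even-money bet, staked at the Kelly fraction 0.2):

```python
import math
import random

# Wealth after n bets is a product of per-bet growth factors, so log wealth
# is a sum of i.i.d. terms; by the law of large numbers, log(wealth)/n
# converges to E[log(growth)], which Kelly maximizes over the stake fraction.
rng = random.Random(1)
p, b, f, n = 0.6, 1.0, 0.2, 100_000
log_wealth = 0.0
for _ in range(n):
    log_wealth += math.log(1 + f * b) if rng.random() < p else math.log(1 - f)

empirical_rate = log_wealth / n
expected_rate = p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)
# empirical_rate ≈ expected_rate ≈ 0.0201 per bet
```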
Right, and Kelly allocation is 0 for negative EV bets.
Carl, thanks, this is great!