# Kelly and the Beast

## Poor models

Consider two alternative explanations to each of the following questions:

Why do some birds have brightly-colored feathers? Because (a) evolution has found that they are better able to attract mates with such feathers or (b) that’s just how it is.

Why do some moths, after a few generations, change color to that of surrounding man-made structures? Because (a) evolution has found that the change in color helps the moths hide from predators or (b) that’s just how it is.

Why do some cells communicate primarily via mechanical impulses rather than electrochemical impulses? Because: (a) for such cells, evolution has found a trade-off between energy consumption, information transfer, and information processing such that mechanical impulses are preferred or (b) that’s just how it is.

Why do some cells communicate primarily via electrochemical impulses rather than mechanical impulses? Because: (a) for such cells, evolution has found a trade-off between energy consumption, information transfer, and information processing such that electrochemical impulses are preferred or (b) that’s just how it is.

Clearly the first set of explanations are better, but I’d like to say a few things in defense of the second.

Evolution’s preference for one over the other could very well have nothing to do with mating, predators, energy consumption, information transfer, or information processing. Those are the best theoretical guesses we have, and they have no experimental backing.

Evolution works as a justification for contradictory phenomena.

The second set of explanations are simpler.

The second set of explanations have perfect sensitivity, specificity, precision, etc.

If that’s not enough to convince you, then I propose as a middle-ground another alternative explanation for any situation where evolution alone might be used as such: “I don’t have a clue.” It’s more honest, more informative, and it does more to get people to actually investigate open questions, as opposed to pretending those questions have been addressed in any meaningful way.

## Less poor models

When people use evolution as a justification for a phenomenon, what they tend to imagine is this:

Changes are gradual.

Changes occur on the time scale of generations, not individuals.

Duplication and termination are, respectively, positively and negatively correlated with the changing thing.

If you agree, then I’m sure the following questions regarding standard evopsych explanations of social signaling phenomenon X should be easy to answer:

What indication is there that the change in adoption of X was gradual?

What indication is there that change in adoption of X happens on the time scale of generations and not individuals (i.e., that individuals have little influence in their own local adoption of X)?

What constitutes duplication and termination? Is the hypothesized chain of correlation short enough or reliable enough to be convincing?

If you agreed with the decomposition of “evolution” and disagreed with any of the subsequent questions, then your model of evolution might not be consistent, or you may have a preference for unjustified explanations. In conversation, this isn’t really an issue, but perhaps there are some downsides to using inconsistent models for your personal worldviews.

## Optimal models

In 1956, John Kelly described an equation for betting optimally on a coin-toss game weighted in your favor. If you are expected to gain money on average, and you can place bets repeatedly, the Kelly bet lets you grow your principal at the greatest possible rate.

You can read the paper here: http://www.herrold.com/brokerage/kelly.pdf. It’s important for you to be able to read papers like this…

The argument goes like this. Given a coin that lands on heads with probability p and tails with probability q=1-p, I let you bet k (a fraction of your total) that the coin will land heads. If you win, I give you b*k. If you lose, I take your k. After n rounds, you will have won on average p*n times, and you will have lost q*n times. Your new total will look like this:

total = (initial) * (1 + b*k)^(p*n) * (1 - k)^(q*n)

Your bet is optimized when the derivative of this total with respect to k is zero and the total decreases in both directions away from that point.
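
Carrying out that optimization (a sketch in my own notation, not copied from the paper; it maximizes the per-round log growth rate, which has the same maximizer as the total):

```latex
G(k) = p\ln(1 + bk) + q\ln(1 - k), \qquad
G'(k) = \frac{pb}{1 + bk} - \frac{q}{1 - k} = 0
\;\Longrightarrow\; k^{*} = p - \frac{q}{b} = \frac{bp - q}{b}
```

For the fair coin with b = 3 offered further down, this gives k* = 1/2 - (1/2)/3 = 1/3.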

You can easily check that the equation is always concave down when the odds are in your favor and k is between 0 and 1. Note that there is always exactly one local maximum: the value of k found above. There is also one undefined value, k = 1 (all-in every time), which, if you plug it into the original equation, results in you going broke.

The Kelly bet makes one key assumption: chance is neither with you nor against you. If you play n games, you will win n*p of them, and you will lose n*q. With this assumption, which often aligns closely with reality, your principal will grow fairly reliably, and it will grow exponentially. Moreover, you will never go broke with a Kelly bet.
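
As a worked instance (my arithmetic, using the b = 3, p = 1/2 game offered below): under this assumption your principal grows by a fixed factor per round,

```latex
(1 + bk^{*})^{p}\,(1 - k^{*})^{q} = 2^{1/2}\left(\tfrac{2}{3}\right)^{1/2} = \sqrt{4/3} \approx 1.155
```

which compounds exponentially, and since k* < 1 you never stake your entire principal.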

There is a second answer though that doesn’t make this assumption: go all-in every time. Your expected winnings, summed over all possible coin configurations, will be:

expected total = (initial) * (p*(1+b) + q*0)^n = (initial) * (p*(1+b))^n

If you run the numbers, you’ll see that this second strategy often beats the Kelly bet on average, though most outcomes result in you being broke.
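
To make “run the numbers” concrete, here is a minimal simulation sketch (my own illustration, not from the original post); b = 3 and p = 1/2 match the game offered below, and the round and trial counts are arbitrary:

```python
import random
import statistics

B, P = 3.0, 0.5                  # payout multiple and win probability (illustrative)
ROUNDS, TRIALS = 10, 100_000     # arbitrary simulation sizes

def play(bet_fraction, rounds=ROUNDS, start=1.0):
    """Final bankroll after betting the given fraction of it each round."""
    total = start
    for _ in range(rounds):
        stake = bet_fraction * total
        if random.random() < P:
            total += B * stake   # win: gain b times the stake
        else:
            total -= stake       # lose: forfeit the stake
        if total <= 0:           # broke: no further bets possible
            break
    return total

kelly_k = P - (1 - P) / B        # k = p - q/b = 1/3 for this game
kelly = [play(kelly_k) for _ in range(TRIALS)]
allin = [play(1.0) for _ in range(TRIALS)]

# The all-in strategy wins on average (a few runs grow enormously),
# but its typical outcome is going broke; the Kelly bet grows reliably.
print("Kelly  mean %10.1f  median %6.2f" % (statistics.mean(kelly), statistics.median(kelly)))
print("All-in mean %10.1f  median %6.2f" % (statistics.mean(allin), statistics.median(allin)))
```

With these settings, the all-in strategy’s sample mean is dominated by the handful of runs that win every toss, while its median run ends broke; the Kelly runs cluster around steady exponential growth.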

So I’ll offer you a choice. We’ll play the coin game with a fair coin. You get 3U (utility) for every 1U you bet if you win, and you lose your 1U otherwise. You can play the game with any amount of utility for up to, say, a trillion rounds. Would you use Kelly’s strategy, with which your utility would almost certainly grow exponentially to be far larger than your initial sum, or would you use the second strategy, which performs far, far better on average, though through which you’ll almost certainly end up turning the world into a permanent hell?

This assumes nothing about the utility function other than that utility can reliably be increased and decreased by specific quantities. If you prefer the Kelly bet, then you’re not optimizing for any utility function on average, and so you’re not optimizing for any utility function at all.

I choose the second strategy, of course. The Kelly strategy in your problem is risk-averse w.r.t. utility, which is irrational because utility is pretty much defined as the measure you aren’t risk-averse about. That said, it’s very hard to imagine a situation that would correspond to high utility in human decision-making (i.e. you’d happily agree to 1% chance of that situation and 99% chance of extinction), so I don’t blame people for feeling that risk-aversion must be the answer.

Edit: it’s better to think of utility not as the amount of goodness in a situation (happy people etc), but as a way to summarize your current decision-making under uncertainty. For example, you don’t have to assign utility 20000 to any situation, unless you’d truly prefer a 1% chance of that situation to 100% chance of utility 100. That makes it clear just how unintuitive high utilities really are.

Would your answer change if I let you flip the coin until you lost? Based on your reasoning, it should not. Despite this being effectively guaranteed extinction, the infinitesimal chance of winning is overwhelmed by the gains from infinitely many good coin flips.

I would not call the Kelly strategy risk-averse. I imagine that word to mean “grounded in a fantasy where risk is exaggerated”. I would call the second strategy risk-prone. The difference is that the Kelly strategy ends up being the better choice in realistic cases, whereas the second strategy ends up being the better choice in the extraordinarily rare wishful cases. In that sense, I see this question as one that differentiates people that prefer to make decisions grounded in reality from those that prefer to make decisions grounded in wishful thinking. The utilitarian approach then is prone to wishful thinking.

Still, I get your point. There may exist a low-chance scenario for which I would, with near certainty, trade the Kelly-heaven world for a second-hell world. To me, that means there exists a scenario that could lull me into gambling on wildly-improbable wishful thinking. Though such scenarios may exist, and though I may bet on such scenarios when presented with them, I don’t believe it’s reasonable to bet on them. I can’t tell if you literally believe that it’s reasonable to bet on such scenarios or if you’re imagining something wholly different from me.

Yes, my answer would change to “I don’t know”. vNM expected utility theory certainly doesn’t apply when some strategy’s expected utility isn’t a real number. I don’t know any other theory that applies, either. You might appeal to pre-theoretic intuition, but it’s famously unreliable when talking about infinities.

The rest of your comment seems confused. Let’s say the “reasonable” part of you is a decision-making agent named Bob. If Bob wouldn’t bet the house on any low probability scenario, that means Bob doesn’t assign high utility to anything (because assigning utility is just a way to encode Bob’s decisions), so the thought experiment is impossible for Bob to begin with. That’s fine, but then it doesn’t make sense to say that Bob would choose the Kelly strategy.

It can make sense to say that a utility function is bounded, but that implies certain other restrictions. For example, bounded utility functions cannot be decomposed into independent (additive or multiplicative, these are the only two options) subcomponents if the number of subcomponents is unknown. Any utility function that is summed or multiplied over an unknown number of independent (e.g.) societies must be unbounded*. Does that mean you believe that utility functions can’t be aggregated over independent societies or that no two societies can contribute independently to the utility function? The latter implies that a utility function cannot be determined without knowing about all societies, which would make the concept useless. Do you believe that utility functions can be aggregated at all beyond the individual level?

Keep in mind that “unbounded” here means “arbitrarily additive”. In the multiplicative case, even if a utility function is always less than 1, if an individual’s utility can be made arbitrarily close to 0, then it’s still unbounded. Such an individual still has enough to gain by betting on a trillion coin tosses.

You mentioned that a utility function should be seen as a proxy to decision making. If decisions can be independent, then their contributions to the definition of a utility function must be independent*. If the utility function is bounded, then the number of independent decisions something can decide between must also be bounded. Maybe that makes sense for individuals since you distinguished a utility function as a summary of “current” decision-making, and any individual is presumably limited in their ability to decide between independent outcomes at any given point in time. Again, though, this causes problems for aggregate utility functions.

Consider the functor F that takes any set of decisions (with inclusion maps between them) to the least-assuming utility function consistent with them. There exists a functor G that takes any utility function to the maximal set of decisions derivable from it. F and G together form a contravariant adjunction between sets of decisions and utility functions. F is then left-adjoint to G. Therefore F sends finite coproducts to finite products. Therefore for any disjoint union of decisions A, B, the least-assuming utility function defined over them exists and is F(A+B) = F(A)*F(B). The proof is nearly identical for covariant adjunctions.

It seems like nonsense to say that utility functions can’t be aggregated. A model of arbitrary decision making shouldn’t suddenly become impossible just because you’re trying to model, say, three individuals rather than one. The aggregate has preferential decision making just like the individual.

I don’t know if my utility function is bounded. My statement was much weaker, that I’m not confident about decision-making in situations involving infinities. You’re right that the problem happens not just for unbounded utilities, but also for arbitrarily fine distinctions between utilities. None of these seem to apply to your original post though, where everything is finite and I can be pretty damn confident.

Algebraic reasoning is independent of the number system used. If you are reasoning about utility functions in the abstract and if your reasoning does not make use of any properties of numbers, then it doesn’t matter what numbers you use. You’re not using any properties of finite numbers to define anything, so the fact of whether or not these numbers are finite is irrelevant.

The original post doesn’t require arbitrarily fine distinctions, just 2^trillion distinctions. That’s perfectly finite.

Your comment about Bob not assigning a high utility value to anything is equivalent to a comment stating that Bob’s utility function is bounded.

Right, but Bob was based on your claims in this comment about what’s “reasonable” for you. I didn’t claim to agree with Bob.

Fair enough. I have a question then. Do you personally agree with Bob?

You’re asking if my utility function is bounded, right? I don’t know. All the intuitions seem unreliable. My original confident answer to you (“second strategy of course”) was from the perspective of an agent for whom your thought experiment is possible, which means it necessarily disagrees with Bob. Didn’t want to make any stronger claim than that.

I am, and thanks for answering. Keep in mind that there are ways to make your intuition more reliable, if that’s a thing you want.

It can make sense to say that a utility function is bounded, but that implies certain other restrictions. For example, bounded utility functions cannot be decomposed into independent (additive) subcomponents if the number of subcomponents is unknown. Any utility function that is summed over an unknown number of independent (e.g.) societies must be unbounded. Does that mean you believe that utility functions can’t be aggregated over independent societies or that no two societies can contribute independently to the utility function? The latter implies that a utility function cannot be determined without knowing about all societies, which would make the concept useless. Do you believe that utility functions can be aggregated at all beyond the individual level?

You mentioned that a utility function should be seen as a proxy to decision making. If decisions can be independent, then their contributions to the definition of a utility function must be independent*. If the utility function is bounded, then the number of independent decisions something can decide between must also be bounded. Maybe that makes sense for individuals since you distinguished a utility function as a summary of “current” decision-making, and any individual is presumably limited in their ability to decide between independent outcomes at any given point in time. Again, though, this causes problems for aggregate utility functions.

Consider the functor F that takes any set of decisions (with inclusion maps between them) to the least-assuming utility function consistent with them. There exists a functor G that takes any utility function to the maximal set of decisions derivable from it. F and G together form a contravariant adjunction between sets of decisions and utility functions. F is then left-adjoint to G. Therefore F sends finite coproducts to finite products. Therefore for any independent sets of decisions A, B and their union A+B, the least-assuming utility function defined over them exists and is F(A+B) = F(A)*F(B).

It seems like nonsense to say that utility functions can’t be aggregated. A model of arbitrary decision making shouldn’t suddenly become impossible just because you’re trying to model, say, three individuals rather than one. The aggregate has preferential decision making just like the individual.

This is right, and proves conclusively that all humans have bounded utility, because no human would accept any bet with e.g. 1 in Graham’s number odds of success, or if they did, it would not be for the sake of that utility, but for the sake of something else like proving to people that they have consistent principles.

“Proves conclusively” is a bit too strong. The conclusion relies on human intuitions about large numbers, and intuitions about what’s imaginable and what isn’t, both of which seem unreliable to me. I think it’s possible (>1%) that the utility function of reasonably defined CEV will be unbounded.

Agreed. Utility is a flow, not a stock: it doesn’t carry over from decision to decision, so you can’t “lose” utility; you just find yourself in a state with lower utility than the alternative you were considering. And there’s no reason it can’t be negative (though there’s no reason for it to be; it can safely be normalized to whatever range you prefer).

Either of these would make the Kelly strategy, whose point is to minimize the chance of going broke and being barred from future wagers, irrelevant.

When talking about wagers, you really need to think in terms of potential future universe states, and a corresponding (individual, marginal) function to compare the states against each other. The result of that function is called “utility”. All it does is assign a desirability number to a state of the universe for that actor.

Attempts to treat utility as an actual resource in and of itself are just confused.

So, if you change the problem to be meaningful, say you’re wagering remaining days of life, which your utility function is linear in (at the granularity we’re discussing), then Kelly is the clear strategy. You want to maximize the sum of results while minimizing the chance that you cross to zero and have to stop playing.

Dagon: You can artificially bound utility to some arbitrarily low “bankruptcy” point. The lack of a natural one isn’t relevant to the question of whether a utility function makes sense here. On treating utility as a resource, if you can make decisions to increase or decrease utility, then you can play the game. Your basic assumption seems to be that people can’t meaningfully make decisions that change utility, at which point there is no point in measuring it, as there’s nothing anyone can do about it.

The point about unintuitive high utilities and upper-bounded utilities deserves, I believe, another post.

Such as...?

Not for any sensible definition of the word “simpler”. They just overfit everything.

Yes, but zero prediction or compression power.

Also it’s unclear to me what the connection is between this part and the second.

Again, not informative for any sensible definition of the word.

My bad, I did a poor job explaining that. The first part is about the problems of using generic words (evolution) with fuzzy decompositions (mates, predators, etc) to come to conclusions, which can often be incorrect. The second part is about decomposing those generic words into their implied structure, and matching that structure to problems in order to get a more reliable fit.

I don’t believe that “I don’t know” is a good answer, even if it’s often the correct one. People have vague intuitions regarding phenomena, and wouldn’t it be nice if they could apply those intuitions reliably? That requires a mapping from the intuition (evolution is responsible) to the problem, and the mapping can only be made reliable once the intuition has been properly decomposed into its implied structure, and even then, only if the mapping is based on the decomposition.

I started off by trying to explain all of that, but realized that there is far too much when starting from scratch. Maybe someday I’ll be able to write that post...

The cell example is an example of evolution being used to justify contradictory phenomena. The exact same justification is used for two opposing conclusions. If you thought there was nothing wrong with those two examples being used as they were, then there is something wrong with your model. They literally use the exact same justification to come to opposing conclusions.

The second set of explanations have fewer, more reliably-determinable dependencies, and their reasoning is more generally applicable.

That is correct, they have zero prediction and compression power. I would argue that the same can be said of many cases where people misuse evolution as an explanation.

When people falsely pretend to have knowledge of some underlying structure or correlate, they are (1) lying and (2) increasing noise, which by various definitions is negative information. When people use evolution as an explanation in cases where it does not align with the implications of evolution, they are doing so under a false pretense. My suggested approach (1) is honest and (2) conveys information about the lack of known underlying structure or correlate.

I don’t know what you mean by “sensible definition”. I have a model for that phrase, and yours doesn’t seem to align with mine.

Seconded.

My utility function is bounded. This means that your assumption “that utility can reliably be increased and decreased by specific quantities” is sometimes false for my function. It will depend on the details but in some cases this means that I should use the Kelly strategy even though I am optimizing for a utility function.

The fact that the function is bounded also explains cousin_it’s point that “it’s very hard to imagine a situation that would correspond to high utility in human decision-making.”