Meetup Notes: Ole Peters on ergodicity

Ole Peters claims that the standard expected utility toolbox for evaluating wagers is a flawed basis for rational decisionmaking. In particular, it commonly fails to take into account that the wealth of an investor/bettor taking a series of repeated bets does not follow an ergodic process.

Optimization Process, internety, I, and a couple of others spent about 5 hours across a couple of Seattle meetups investigating what Peters was saying.

Background

Why do we care?

Proximally, because Nassim Taleb is bananas about ergodicity.

More interestingly, expected utility maximization is widely accepted as the basis for rational decisionmaking. Finding flaws (or at least pathologies) in this foundation is therefore quite high leverage.

A specific example: many people’s retirement investment strategies might be said to take the “ensemble average” as their optimization target. That is, their portfolios are built on the assumption that, every year, an individual investor should make the choice that would maximize the mean end-of-year wealth (or mean utility) when averaged across (e.g.) 100,000 investors making that same choice that year. It’s claimed that this means individual retirement plans can’t work, because many individuals will, in actuality, eventually be impoverished by market swings, and that social insurance schemes (e.g. Social Security), in which the current rich transfer wealth to the current poor, avoid this pitfall.

Claims about shortcomings in expected utility maximization are also interesting because I’ve felt vaguely confused for a long time about why expected value/utility is the right way to evaluate decisions; it seems like I might be more strongly interested in something like “the 99th percentile outcome for the overall utility generated over my lifetime”. Any work that promises to pick at the corners of EU maximization is worth looking at.

What does existing non-Peters theory say?

The Von Neumann-Morgenstern theorem says, loosely, that all rational actors are maximizing some utility function in expectation. It’s almost certainly not the case that Ole Peters has produced a counterexample, but (again) identifying apparently pathological behavior implied by the VNM math would be quite useful.

Economics research as a whole tends to take it as given that individual actors are trying to maximize, in expectation, the logarithm of their wealth (or some similar risk-averse function mapping wealth to utility).
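
To make that concrete, here is a minimal sketch (our own illustration, not taken from any particular textbook) of what “maximize expected log wealth” means operationally; the gamble and function names are made up for the example.

```python
import math

def expected_log_wealth(wealth, outcomes):
    """Expected log of post-gamble wealth.

    `outcomes` is a list of (probability, multiplier-on-wealth) pairs.
    """
    return sum(p * math.log(wealth * mult) for p, mult in outcomes)

wealth = 100.0

# A 50/50 "double or halve" gamble: +25% expected wealth, but zero expected
# change in log wealth, so a log-utility agent is exactly indifferent to it.
double_or_halve = [(0.5, 2.0), (0.5, 0.5)]
# Declining any gamble keeps wealth unchanged with certainty.
decline = [(1.0, 1.0)]

print(expected_log_wealth(wealth, double_or_halve))  # == math.log(100)
print(expected_log_wealth(wealth, decline))          # == math.log(100)
```

A linear-in-wealth agent would take the double-or-halve gamble eagerly; under the log, the 50% downside weighs as heavily as the 100% upside, which is the risk aversion being referred to.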

Specific claims made by Peters et al.

We were pretty confused about this and spent a bunch of investigation time simply nailing down what was being claimed!

What we learned

1.5x/0.6x coin flip bet

This is a specific example from https://medium.com/fresheconomicthinking/revisiting-the-mathematics-of-economic-expectations-66bc9ad8f605

Here’s what we concluded. (The bracketed tags indicate the level of rigor behind each conclusion.)

  • It is indeed the case that playing many, many rounds of this bet compresses almost all the winnings into a tiny corner of probability space, with “lost a bunch of money” being the overwhelming majority of outcomes. [math proof]

  • However, no log-wealth-maximizer would accept the bet, ever (at least, not at the stated “bet entire bankroll every time” stakes). [math proof]

  • Betting only a tiny, constant fraction of your bankroll every round instead of all your money at once does, as expected, make you richer most of the time (see the sketch just after this list). [Monte Carlo simulation, intuition]

  • Reasoning about what happens over a gazillion rounds of the game is a little bunk because you don’t have to commit to play a zillion rounds up front. [hand-waving math intuition]

    • i.e. if someone is choosing, every round, whether or not to keep playing the game, it’s a red herring to argue that their decision in round N to keep playing is dumb on the grounds that committing up front to play a gazillion ( >> N ) rounds would be a terrible idea.
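
For the second and third bullets (and the flavor of the first), here is a minimal Monte Carlo sketch of the kind of simulation we ran; the round/trial counts and the 10% bet fraction are arbitrary illustrative choices, not a claim about the optimal (Kelly) fraction.

```python
import math
import random

random.seed(0)

# The gamble: each round, a fair coin multiplies the staked amount by 1.5
# (heads) or 0.6 (tails).

# Second bullet: a log-wealth maximizer rejects the all-in version, because
# the expected change in log wealth per round is negative:
print(0.5 * math.log(1.5) + 0.5 * math.log(0.6))  # ~ -0.053 < 0

START = 100.0

def simulate(bet_fraction, rounds=1000, trials=2000):
    """Final wealths when staking `bet_fraction` of the bankroll each round."""
    finals = []
    for _ in range(trials):
        w = START
        for _ in range(rounds):
            stake = bet_fraction * w
            mult = 1.5 if random.random() < 0.5 else 0.6
            w = (w - stake) + stake * mult
        finals.append(w)
    return sorted(finals)

# First and third bullets: betting everything impoverishes nearly everyone
# despite the positive expected value, while staking a small constant
# fraction usually leaves you richer than you started.
for frac in (1.0, 0.1):
    finals = simulate(frac)
    median = finals[len(finals) // 2]
    lost = sum(w < START for w in finals) / len(finals)
    print(f"bet fraction {frac}: median final wealth {median:.3g}, "
          f"fraction of players who lost money {lost:.1%}")
```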

“Rich house, poor player” theorems

The “coin flip” example of the previous section is claimed to be interesting because most players go bankrupt, despite every wager offered being positive expected value to the player.

So then an interesting question arises: can some rich “house” exploit some less-rich “player” by offering a positive-expected-value wager that the player will always choose to accept, but that leads with near certainty to the player’s bankruptcy when played indefinitely?

(As noted in the last section, no log-wealth-utility player would take even the first bet, so we chose to steelman/simplify by assuming that wealth == utility: either adjusting the gamble so that it is positive expected utility, or adjusting the player to have utility linear in wealth.)

We think it’s pretty obvious that, if the house can fund wagers whose player-utility is unbounded (either the house has infinity money, or the player has some convenient utility function), then, yes, the house can almost surely bankrupt the player.
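
One concrete construction that convinced us (our own example, not one from Peters): the house offers, every round, “stake your entire bankroll; with probability 0.51 it doubles, with probability 0.49 you lose it all.” With utility linear in wealth, every round is a good deal, but the player is ruined almost surely:

$$
\mathbb{E}[\Delta W] = 0.51\,W - 0.49\,W = 0.02\,W > 0,
\qquad
\Pr[\text{still solvent after } n \text{ rounds}] = 0.51^{\,n} \to 0 \ \text{ as } n \to \infty.
$$

Note that the house’s potential payout doubles every round, which is where the “can fund unbounded wagers” assumption comes in.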

So, instead, consider a house that has some finite amount of money. We have a half-baked math proof ([1] [2]) that there can’t exist a way for the house to almost-surely (defined as “drive the probability of bankruptcy to above (1 - epsilon) for any given epsilon”) bankrupt the player.
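
One hand-wavy way to see why something in this direction should hold (our own sketch, not necessarily the argument in the linked notes): with wealth == utility, every wager the player accepts has nonnegative expected value for them, so the player’s wealth W_n is a submartingale bounded between 0 and the combined starting bankroll, and therefore converges to some W_∞. Then

$$
\mathbb{E}[W_\infty] \ge W_0
\quad \text{and} \quad
W_\infty \le W_0 + H_0
\quad \Longrightarrow \quad
\Pr[\text{player goes bankrupt}] \le 1 - \frac{W_0}{W_0 + H_0} < 1,
$$

where W_0 and H_0 are the player’s and house’s starting bankrolls. So no betting scheme the finite house can offer drives the player’s bankruptcy probability arbitrarily close to 1.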

Tangentially: there’s a symmetry issue here: you can just as well say “the house will eventually go bankrupt” if the house will be repeatedly playing some game with unbounded max payoff with many players. However, note that zero-sum games that neither party deems wise to play are not unheard of; risk-averse agents don’t want to play any zero-sum games at fair odds!

Paper: The time resolution of the St Petersburg Paradox

This paper claims to apply Peters’s time-average (instead of ensemble-average) methods to resolve the St. Petersburg Paradox, and to derive “utility logarithmic in wealth” as a straightforward implication of the time-average reasoning he uses.
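
For reference, here is the standard log-utility treatment that the paper says its time-average argument recovers: compute the largest ticket price at which buying the lottery doesn’t lower expected log wealth. This is our own sketch, using one common convention (payout 2^k with probability 2^-k); the function names are ours.

```python
import math

def expected_log_gain(wealth, price, max_k=200):
    """Expected change in log wealth from buying one St. Petersburg ticket
    (payout 2**k with probability 2**-k) at `price`; sum truncated at max_k."""
    total = 0.0
    for k in range(1, max_k + 1):
        total += 2.0 ** -k * math.log(wealth - price + 2.0 ** k)
    return total - math.log(wealth)

def break_even_price(wealth):
    """Largest price a log-wealth maximizer would pay, found by bisection."""
    lo, hi = 0.0, float(wealth)
    for _ in range(60):
        mid = (lo + hi) / 2
        if expected_log_gain(wealth, mid) >= 0:
            lo = mid
        else:
            hi = mid
    return lo

# The expected payout is infinite, but the price a log-utility player will
# pay is finite and grows only slowly (roughly logarithmically) with wealth.
for w in (10, 100, 1000, 10**6):
    print(w, round(break_even_price(w), 2))
```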

We spent about an hour trying to digest this. Unfortunately, academic math papers are often impenetrable even when they’re making correct statements using mathematical tools the reader is familiar with, so we’re not sure of our conclusions.

That said, here are some loose notes pointing to particular steps we either couldn’t verify the validity of or think are invalid.

Optimization Process also pointed out that equation (6.6) doesn’t really make sense for a lottery where the payout is always zero.

This paper works from the assumption that the player is trying to maximize (in expectation) the exponential growth rate of their wealth. We noticed that this is just the log-wealth-maximizer: to get from “maximizes expected growth rate” to “maximizes the logarithm of wealth”, you don’t seem to actually need whatever derivation Peters’s paper is making.
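
Spelling that out (our own restatement, not a step taken from the paper): over T rounds with i.i.d. per-round wealth multipliers r_1, …, r_T (so W_T = W_0 · r_1 · … · r_T), the exponential growth rate is

$$
g_T \;=\; \frac{1}{T}\ln\frac{W_T}{W_0} \;=\; \frac{1}{T}\sum_{t=1}^{T}\ln r_t,
\qquad
\mathbb{E}[g_T] \;=\; \mathbb{E}[\ln r_1],
$$

so maximizing the expected exponential growth rate and maximizing the expected change in log wealth per round are the same optimization by definition.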

Conclusions

We still don’t understand what “the problem with expected utility” is that Peters is pointing at. It seems like expected utility with a risk-averse utility function is sufficient to make appropriate choices in the 1.5x/0.6x flip and St. Petersburg gambles.

Peters’s time-average vs. ensemble-average St. Petersburg paper either has broken math, or we don’t understand it. Either way, we’re still confused about the time- vs. ensemble-average distinction’s application to gambles.

Peters’s St. Petersburg Paradox paper does derive something equivalent to log-wealth-utility from maximizing expected growth rate, but maybe this is an elaborate exercise in begging the question by assuming “maximize expected growth rate” as the goal.

I, personally, am unimpressed by Peters’s claims, and I don’t intend to spend more brainpower investigating them.