Optional stopping

I offer you an opportunity to play the following game: if you agree to play, I will flip a fair coin until you tell me to stop, and for each toss of the coin, if it comes up heads I’ll pay you \$1, and if it comes up tails you’ll pay me \$1. Your advantage is that you can stop playing the game whenever you want. How much should you be willing to pay to play this game if you only care about the expected value of your payoff, i.e. if you have linear utility?

Let’s try to work this out. If $V$ is the expected value of the game for you, then $V$ solves

$$V = \max\left(0, \frac{1}{2}(V + 1) + \frac{1}{2}(V - 1)\right)$$

because you can either stop for an expected value of $0$, or toss a coin for an expected value of $V + 1$ with probability $1/2$ and $V - 1$ with probability $1/2$, and we assume you make the optimal choice.

Well, that got us nowhere: this equation is solved by any $V \geq 0$. All we learned is that the expected value of the game is not negative, which is trivial since you can always just choose to stop the moment you start playing! What’s worse is that under our setup this Bellman equation seems to be the only constraint we can find on $V$, so it seems as if we should say this game has no well-defined expected value.

The bounded game

Let’s investigate another variant of this problem to see what we can say about it. Suppose we both have finite bankrolls, $n$ dollars for you and $m$ dollars for me. The game must stop when either of us hits zero, since then we’re not guaranteed to be able to pay if we lose the next coin toss.

In this case, your expected value will be a function $V(n, m)$ of both our bankrolls, and the Bellman equation will now become

$$V(n, m) = \max\left(0, \frac{1}{2} V(n + 1, m - 1) + \frac{1}{2} V(n - 1, m + 1)\right)$$

with the boundary conditions $V(0, m) = V(n, 0) = 0$.

Now we can prove that there is only one solution, and that it is $V = 0$ everywhere. Clearly $V \geq 0$ everywhere by definition, so the average on the right hand side is nonnegative, and we can remove the maximum and simply solve the equation

$$V(n, m) = \frac{1}{2} V(n + 1, m - 1) + \frac{1}{2} V(n - 1, m + 1).$$

The nice thing about this is that it is a decoupled set of single-variable functional equations, one on each line $n + m = N$ for each total $N$, and on each such line the function must be zero because it’s harmonic with zero boundary conditions on both ends. To make this explicit, suppose we fix $N = n + m$ and define $f(n) = V(n, N - n)$. Then, we’ll have

$$f(n) = \frac{1}{2} f(n + 1) + \frac{1}{2} f(n - 1)$$

for $0 < n < N$, with $f(0) = f(N) = 0$,

and from here it’s easy to show by induction that $f(n) = n f(1)$, since the equation forces all the increments $f(n + 1) - f(n)$ to be equal and $f(0) = 0$. However, we also know that on the other end $f(N) = 0$, so this implies that $f(1) = 0$ and therefore $f = 0$ on the whole line $n + m = N$. Since $N$ was arbitrary, we deduce $V$ is zero everywhere. In other words, playing this game is worth nothing when both our bankrolls are finite.
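To see this numerically, here is a minimal Python sketch (the bankroll size, iteration count, and function names are my own choices for illustration) that runs value iteration on the Bellman equation along a single line $n + m = N$ and confirms the values collapse to zero:

```python
import numpy as np

def solve_bounded_game(N=20, iters=20_000):
    """Value-iterate f(n) = max(0, 0.5 * f(n+1) + 0.5 * f(n-1)) on the
    line n + m = N, with the boundary conditions f(0) = f(N) = 0."""
    f = np.ones(N + 1)   # start from an optimistic guess
    f[0] = f[N] = 0.0    # the game ends when either bankroll hits zero
    for _ in range(iters):
        f[1:-1] = np.maximum(0.0, 0.5 * (f[2:] + f[:-2]))
    return f

print(solve_bounded_game().max())  # ~0: the bounded game is worthless everywhere
```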

The unbounded game

I want to make the case that there’s indeed something deep that we’ve stumbled over here. Suppose that we go back to the original game with our bankrolls being unbounded, and you execute the following strategy: play until you win $W$ dollars and then stop, for $W$ to be determined later. In other words, play until the difference between the number of heads and the number of tails that have come up so far is equal to $W$, and then stop. What happens?

Well, let’s figure out the probability that the game stops, since it’s not clear that with this strategy the game stops at all. If $d$ is the difference between the heads and tails that have come up thus far, then the probability $p(d)$ that this strategy will eventually halt is going to solve

$$p(d) = \frac{1}{2} p(d + 1) + \frac{1}{2} p(d - 1)$$

for $d < W$, with $p(W) = 1$.

This is quite similar to what we had before, and it’s easy to solve it in the same way. Suppose $p(W - 1) = 1 - \varepsilon$ for some $\varepsilon \geq 0$. Then, we can argue as we did before that $p(W - k) = 1 - k \varepsilon$ for all $k \geq 0$, and since probabilities are bounded from below by $0$, the only way this can work is if $\varepsilon = 0$ and $p$ is equal to $1$ everywhere. In other words, our strategy stops with probability $1$ no matter what $W$ is.

This is the source of the problem in the unbounded case: for any $W$ we have strategies that stop almost surely (with probability equal to $1$) and that give us arbitrarily large amounts of profit from the game. However, since the bounded version of the game has an expected value of zero, we can deduce that there’s something fishy about these strategies: they all rely crucially on the fact that you have an infinite bankroll and can afford to lose any finite amount of money.
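To make the fishiness concrete, here is a small Monte Carlo sketch of the "stop when you’re up $W$" strategy (the target, cap, seed, and sample count are my own illustrative choices; the cap stands in for the infinite time and bankroll we can’t actually simulate). Nearly every run ends with a profit of exactly $W$, but getting there can take a very long time and a very deep drawdown:

```python
import random

rng = random.Random(0)

def play_until_up(W, cap=10**6):
    """Flip a fair coin until heads minus tails equals W, giving up after
    `cap` flips. Returns (final profit, flips used, worst drawdown seen)."""
    d, worst = 0, 0
    for flips in range(1, cap + 1):
        d += 1 if rng.random() < 0.5 else -1
        worst = min(worst, d)
        if d == W:
            return d, flips, worst
    return d, cap, worst

runs = [play_until_up(W=2) for _ in range(500)]
print(sum(1 for p, _, _ in runs if p == 2))  # almost all 500 runs reach +2...
print(max(f for _, f, _ in runs))            # ...but some take very long...
print(min(w for _, _, w in runs))            # ...and dip deep into the red first
```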

We might therefore ask the following question: what condition does a strategy need to satisfy in order to guarantee that its expected value is zero? In other words, if the space of all possible strategies in the infinite version of the game includes some that aren’t "sensible", which strategies in this collection can we certify as "sensible" in some sense?

The optional stopping theorem

Let’s formalize the question we’re asking. A strategy in this game is just going to be a decision of when we stop playing, and it can only depend on the information we have up until that point (plus some randomness if we want). In other words, in the $n$th round we must make the decision to stop or not based on the information we have from the first $n - 1$ rounds. Then the time at which we stop playing itself becomes a random variable, say $\tau$. Our profit from the game is

$$P = \sum_{n=1}^{\infty} X_n \mathbf{1}_{\tau \geq n}$$

where $X_n$ encodes the result of the $n$th coin flip: it’s equal to $1$ if it came up heads and $-1$ if it came up tails. The indicator $\mathbf{1}_{\tau \geq n}$ is just cutting off the sum if we ever decide to stop playing, so the $n$th flip only appears if $\tau$ is actually larger than or equal to $n$.

If we now do a naive calculation of the expected profit, we’ll get

$$\mathbb{E}[P] = \mathbb{E}\left[\sum_{n=1}^{\infty} X_n \mathbf{1}_{\tau \geq n}\right] = \sum_{n=1}^{\infty} \mathbb{E}\left[X_n \mathbf{1}_{\tau \geq n}\right]$$

and now we can use the fact that we know whether $\tau \geq n$ or not at the end of the $(n-1)$th round to use the law of total expectation:

$$\sum_{n=1}^{\infty} \mathbb{E}\left[X_n \mathbf{1}_{\tau \geq n}\right] = \sum_{n=1}^{\infty} \mathbb{E}\left[\mathbb{E}[X_n \mid \mathcal{F}_{n-1}] \, \mathbf{1}_{\tau \geq n}\right] = 0$$

since the conditional expectation of $X_n$ given the information available at time $n - 1$, which I denoted by $\mathbb{E}[X_n \mid \mathcal{F}_{n-1}]$ here, is always zero.

However, something must’ve gone wrong in this argument, because we actually have a concrete example of a strategy which gives us positive expected profit. Where did the mistake creep in?

It turns out it’s the step in which we swap the expectation with the infinite sum. We can always do this if the sum has finitely many terms, of course, but if it’s infinite we have to be more careful. Since the expectation is a kind of integral and an infinite sum is the limit of its partial sums, the question is essentially about when we can swap a limit with an integral. The dominated convergence theorem controls when we can interchange these two operations: we can do it if we can bound the absolute values of the partial sums from above by something that has finite expected value.
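To see why some domination hypothesis is needed, consider a standard counterexample (a textbook one, not specific to this post): the functions $f_n = n \cdot \mathbf{1}_{(0, 1/n)}$ on $[0, 1]$ each integrate to $1$, yet converge pointwise to $0$, so the limit of the integrals is not the integral of the limit. A quick numerical sketch in Python:

```python
import numpy as np

# Midpoints of a fine grid on (0, 1); f_n = n on (0, 1/n) and 0 elsewhere.
x = (np.arange(10**6) + 0.5) / 10**6
for n in (10, 100, 1000):
    f_n = np.where(x < 1 / n, float(n), 0.0)
    print(n, f_n.mean())  # each integral is ~1, but f_n(x) -> 0 for every fixed x
```

No integrable function dominates all the $f_n$ at once (their supremum behaves like $1/x$ near zero), which is exactly the escape hatch the dominated convergence theorem closes off.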

Can we do this? Well, the best we can really do here is the triangle inequality, so we get

$$\left| \sum_{n=1}^{N} X_n \mathbf{1}_{\tau \geq n} \right| \leq \sum_{n=1}^{\infty} |X_n| \mathbf{1}_{\tau \geq n} = \tau$$

since $|X_n| = 1$ for every $n$ and exactly $\tau$ of the indicators are nonzero.

This is exactly what we wanted: if $\mathbb{E}[\tau]$ is finite, in other words if our strategy has a finite expected stopping time, the dominated convergence theorem applies and the strategy has an expected payoff of zero.

We have proved the following:

Optional stopping theorem, bounded increments form: If the coin flip payoffs $X_n$ are all bounded in absolute value by a constant independent of $n$, then any strategy which has a stopping time of finite expected value has expected payoff equal to zero.

In fact, the statement applies to much more general objects than our simple coin flipping game: any martingale with bounded increments is going to obey this theorem, even if the increments are not independent and identically distributed. Moreover, satisfying some version of the optional stopping theorem turns out to be equivalent to being a martingale: this is the basis for the claim that "martingales represent fair games/efficient markets".
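As an illustration of the theorem (with my own parameter choices), here is a sketch of what happens if we truncate the "stop when you’re up $W$" strategy at a fixed horizon: the stopping time $\min(\tau, N)$ is bounded, hence has finite expectation, and the expected payoff collapses back to zero exactly as the theorem demands:

```python
import random

rng = random.Random(1)

def capped_strategy(W=3, horizon=1000):
    """Play until up W dollars, but stop unconditionally after `horizon`
    flips. The stopping time min(tau, horizon) is bounded, so the optional
    stopping theorem applies and predicts a mean payoff of zero."""
    d = 0
    for _ in range(horizon):
        d += 1 if rng.random() < 0.5 else -1
        if d == W:
            break
    return d

samples = [capped_strategy() for _ in range(20_000)]
print(sum(samples) / len(samples))  # ~0, not +3: the truncation kills the profit
```

The rare runs that hit the horizon while deep in the red contribute exactly enough loss to cancel the $+3$ earned on all the others.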

What about our strategy of stopping when $d = W$? In the simplest case $W = 1$, the stopping time turns out to have the probability mass function

$$P(\tau = 2n - 1) = \frac{C_{n-1}}{2^{2n-1}} \sim \frac{c}{n^{3/2}}$$

where $C_n$ denotes the $n$th Catalan number, and so its expected value is

$$\mathbb{E}[\tau] = \sum_{n=1}^{\infty} (2n - 1) \, P(\tau = 2n - 1) = \infty,$$

as we already knew from applying the optional stopping theorem: if $\mathbb{E}[\tau]$ were finite, the expected payoff would have to be zero, whereas this strategy has a guaranteed payoff of $W > 0$.
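A short simulation sketch (the target $W = 1$, cap, and sample count are my own choices; the cap is a practical safeguard against the heavy tail) checks this tail behavior: $P(\tau > n)$ should decay like $n^{-1/2}$, so $n \cdot P(\tau > n)^2$ should be roughly constant:

```python
import random

rng = random.Random(2)

def tau(W=1, cap=10**6):
    """Number of flips until heads first lead tails by W (capped for safety)."""
    d = 0
    for n in range(1, cap + 1):
        d += 1 if rng.random() < 0.5 else -1
        if d == W:
            return n
    return cap

samples = [tau() for _ in range(2000)]
for n in (10, 100, 1000, 10000):
    p = sum(1 for t in samples if t > n) / len(samples)
    print(n, p, n * p * p)  # the last column is roughly flat: a 1/sqrt(n) tail
```

A tail of order $n^{-1/2}$ means the pmf is of order $n^{-3/2}$, and $\sum_n n \cdot n^{-3/2}$ diverges, which is the infinite expectation seen above.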

What to make of this?

I’ve written this post to highlight how questions about "what is a fair game?" become much trickier to answer when the games in question are infinite in some sense. Even if a game is "locally fair", in the sense that the payoff is a martingale, it may not be "globally fair" when exploited by certain ill-behaved strategies. The optional stopping theorem controls how bad this problem can get.

However, on the flip side, the optional stopping theorem also gives us a guarantee that all well-behaved strategies will behave "as they should", in the sense that they won’t earn the player using them any profit out of nowhere.

To see how powerful this is, consider the following problem: if I flip a fair coin until we either get $k$ more heads than tails or $k$ more tails than heads and then stop, on average how many times do I flip the coin?

There is both an elementary solution to this problem which uses Bellman equations as we did above, and a solution using the optional stopping theorem on the martingale $d_n^2 - n$, where $d_n$ is the number of heads minus the number of tails so far and $n$ is the number of turns played so far: stopping when $|d_n| = k$ gives $\mathbb{E}[\tau] = \mathbb{E}[d_\tau^2] = k^2$. The argument using the optional stopping theorem is both much nicer and generalizes much more readily to other setups than this simple coin flipping game.
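A minimal sketch (names and the values of $k$ are mine) checking the answer $\mathbb{E}[\tau] = k^2$ by simulation:

```python
import random

rng = random.Random(3)

def flips_until_lead(k):
    """Flip a fair coin until |heads - tails| == k; return the flip count."""
    d, n = 0, 0
    while abs(d) < k:
        d += 1 if rng.random() < 0.5 else -1
        n += 1
    return n

for k in (1, 2, 5, 10):
    mean = sum(flips_until_lead(k) for _ in range(20_000)) / 20_000
    print(k, mean)  # should be close to k**2
```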