Zut Allais!

Huh! I was not expecting that response. Looks like I ran into an inferential distance.

It probably helps in interpreting the Allais Paradox to have absorbed more of the gestalt of the field of heuristics and biases, such as:

  • Experimental subjects tend to defend incoherent preferences even when they’re really silly.

  • People put very high values on small shifts in probability away from 0 or 1 (the certainty effect).

Let’s start with the issue of incoherent preferences—preference reversals, dynamic inconsistency, money pumps, that sort of thing.

Anyone who knows a little prospect theory will have no trouble constructing cases where people say they would prefer to play gamble A rather than gamble B; but when you ask them to price the gambles they put a higher value on gamble B than gamble A. There are different perceptual features that become salient when you ask “Which do you prefer?” in a direct comparison, and “How much would you pay?” with a single item.

My books are packed up for the move, but from what I remember, this should typically generate a preference reversal:

  1. 1/3 chance to win $18 and 2/3 chance to lose $1.50

  2. 19/20 chance to win $4 and 1/20 chance to lose $0.25

Most people (IIRC) would rather play 2 than 1. But if you ask them to price the bets separately—ask for a price at which they would be indifferent between having that amount of money and having a chance to play the gamble—people will (IIRC) put a higher price on 1 than on 2. If I’m wrong about this exact example, nonetheless, there are plenty of cases where such a pattern is exhibited experimentally.
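For concreteness, here is the expected-value arithmetic as a quick sketch (Python; the stakes are the ones quoted above). Bet 1 is worth more in expectation, which fits the higher stated price; bet 2 merely wins far more often, which is what dominates the direct choice.

```python
# Expected value of each bet, in dollars.
ev_bet1 = (1/3) * 18.00 + (2/3) * (-1.50)    # rare but large win
ev_bet2 = (19/20) * 4.00 + (1/20) * (-0.25)  # frequent but small win

print(f"EV of bet 1: ${ev_bet1:.4f}")  # $5.0000
print(f"EV of bet 2: ${ev_bet2:.4f}")  # $3.7875
```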

So first you sell them a chance to play bet 1, at their stated price. Then you offer to trade bet 1 for bet 2. Then you buy bet 2 back from them, at their stated price. Then you do it again. Hence the phrase, “money pump”.
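As a toy illustration of the pump (the prices here are hypothetical, not taken from any experiment), suppose the subject prices bet 1 at $5.00 and bet 2 at $4.00, yet picks bet 2 over bet 1 in a direct choice:

```python
price_bet1 = 5.00  # subject's stated price for bet 1 (hypothetical)
price_bet2 = 4.00  # subject's stated price for bet 2 (hypothetical)

wallet = 20.00  # subject's starting money
for cycle in range(1, 4):
    wallet -= price_bet1  # subject buys bet 1 at their own stated price
    # ...subject happily trades bet 1 for bet 2, which they prefer to play...
    wallet += price_bet2  # we buy bet 2 back at their own stated price
    print(f"cycle {cycle}: subject is down ${20.00 - wallet:.2f}")
# Each pass drains price_bet1 - price_bet2 = $1.00 from the subject,
# and nothing in their stated preferences ever stops the loop.
```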

Or to paraphrase Steve Omohundro: If you would rather be in Oakland than San Francisco, and you would rather be in San Jose than Oakland, and you would rather be in San Francisco than San Jose, you’re going to spend an awful lot of money on taxi rides.

Amazingly, people defend these preference patterns. Some subjects abandon them after the money-pump effect is pointed out—revise their price or revise their preference—but some subjects defend them.

On one occasion, gamblers in Las Vegas played these kinds of bets for real money, using a roulette wheel. And afterward, one of the researchers tried to explain the problem with the incoherence between their pricing and their choices. From the transcript:

Experimenter: Well, how about the bid for Bet A? Do you have any further feelings about it now that you know you are choosing one but bidding more for the other one?
Subject: It’s kind of strange, but no, I don’t have any feelings at all whatsoever really about it. It’s just one of those things. It shows my reasoning process isn’t so good, but, other than that, I… no qualms.
...
E: Can I persuade you that it is an irrational pattern?
S: No, I don’t think you probably could, but you could try.
...
E: Well, now let me suggest what has been called a money-pump game and try this out on you and see how you like it. If you think Bet A is worth 550 points [points were converted to dollars after the game, though not on a one-to-one basis] then you ought to be willing to give me 550 points if I give you the bet...
...
E: So you have Bet A, and I say, “Oh, you’d rather have Bet B wouldn’t you?”
...
S: I’m losing money.
E: I’ll buy Bet B from you. I’ll be generous; I’ll pay you more than 400 points. I’ll pay you 401 points. Are you willing to sell me Bet B for 401 points?
S: Well, certainly.
...
E: I’m now ahead 149 points.
S: That’s good reasoning on my part. (laughs) How many times are we going to go through this?
...
E: Well, I think I’ve pushed you as far as I know how to push you short of actually insulting you.
S: That’s right.

You want to scream, “Just give up already! Intuition isn’t always right!”

And then there’s the business of the strange value that people attach to certainty. Again, I don’t have my books, but I believe that one experiment showed that a shift from 100% probability to 99% probability weighed more heavily in people’s minds than a shift from 80% probability to 20% probability.

The problem with attaching a huge extra value to certainty is that one time’s certainty is another time’s probability.

Yesterday I talked about the Allais Paradox:

  • 1A. $24,000, with certainty.

  • 1B. 33/34 chance of winning $27,000, and 1/34 chance of winning nothing.

  • 2A. 34% chance of winning $24,000, and 66% chance of winning nothing.

  • 2B. 33% chance of winning $27,000, and 67% chance of winning nothing.

The naive preference pattern on the Allais Paradox is 1A > 1B and 2B > 2A. Then you will pay me to throw a switch from A to B because you’d rather have a 33% chance of winning $27,000 than a 34% chance of winning $24,000. Then a die roll eliminates a chunk of the probability mass. In both cases you had at least a 66% chance of winning nothing. This die roll eliminates that 66%. So now option B is a 33/34 chance of winning $27,000, but option A is a certainty of winning $24,000. Oh, glorious certainty! So you pay me to throw the switch back from B to A.
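To see that case 1 and case 2 are the same decision viewed from opposite sides of the die roll, multiply the conditional gambles by the 34% chance of surviving the roll at all. A quick check with exact fractions:

```python
from fractions import Fraction

p_survive = Fraction(34, 100)  # the die roll spares you with probability 34%

# Conditional gambles after surviving the roll (this is case 1):
#   option A: $24,000 with certainty; option B: 33/34 chance of $27,000.
p_24k = p_survive * 1                 # unconditional chance of the $24,000
p_27k = p_survive * Fraction(33, 34)  # unconditional chance of the $27,000

print(p_24k)  # 17/50  = 34%, which is exactly option 2A
print(p_27k)  # 33/100 = 33%, which is exactly option 2B
```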

Now, if I’ve told you in advance that I’m going to do all that, do you really want to pay me to throw the switch, and then pay me to throw it back? Or would you prefer to reconsider?

Whenever you try to price a probability shift from 24% to 23% as less important than a shift from 100% to 99%, whenever you try to make an increment of probability have more value near an end of the scale, you open yourself up to this kind of exploitation. I can always set up a chain of events that eliminates the probability mass, a bit at a time, until you’re left with a “certainty” that flips your preferences. One time’s certainty is another time’s uncertainty, and if you insist on treating the distance from 0.99 to 1 as special, I can cause you to invert your preferences over time and pump some money out of you.

Can I persuade you, perhaps, that this is an irrational pattern?

Surely, if you’ve been reading this blog for a while, you realize that you—the very system and process that reads these very words—are a flawed piece of machinery. Your intuitions are not giving you direct, veridical information about good choices. If you don’t believe that, there are some gambling games I’d like to play with you.

There are various other games you can also play with certainty effects. For example, if you offer someone a certainty of $400, or an 80% probability of $500 and a 20% probability of $300, they’ll usually take the $400. But if you ask people to imagine themselves $500 richer, and ask whether they would prefer a certain loss of $100 or a 20% chance of losing $200 (and an 80% chance of losing nothing), they’ll usually take the chance of losing $200. Same probability distribution over outcomes, different descriptions, different choices.
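If the equivalence isn’t obvious, write each option out as a distribution over final amounts; a small sketch:

```python
# Each option as a {final amount in dollars: probability} distribution.
gain_certain = {400: 1.0}            # "a certainty of $400"
gain_gamble  = {500: 0.8, 300: 0.2}  # "80% of $500, 20% of $300"

# The loss framing starts from an imagined $500 and subtracts.
loss_certain = {500 - 100: 1.0}            # "a certain loss of $100"
loss_gamble  = {500: 0.8, 500 - 200: 0.2}  # "20% chance of losing $200"

print(gain_certain == loss_certain)  # True
print(gain_gamble == loss_gamble)    # True
```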

Yes, Virginia, you really should try to multiply the utility of outcomes by their probability. You really should. Don’t be embarrassed to use clean math.

In the Allais paradox, figure out whether 1 unit of the difference between getting $24,000 and getting nothing outweighs 33 units of the difference between getting $24,000 and getting $27,000. (To check: 1A beats 1B exactly when U($24,000) > (33/34) U($27,000) + (1/34) U($0); multiply through by 34 and rearrange. Case 2 is the same inequality, just scaled by the 34% chance of winning anything at all.) If the 1 unit outweighs the 33 units, prefer 1A to 1B and 2A to 2B. If the 33 units outweigh the 1 unit, prefer 1B to 1A and 2B to 2A. As for calculating the utility of money, I would suggest using an approximation that assumes utility is logarithmic in money. If you’ve got plenty of money already, pick B. If you’re nearly broke, so that the certain $24,000 is the difference between having money and having nothing, pick A. Case 1 or case 2, it makes no difference. Oh, and be sure to assess the utility of total asset values—the utility of final outcome states of the world—not changes in assets, or you’ll end up inconsistent again.
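Here is that comparison under the logarithmic approximation, as a minimal sketch; the starting-asset figures are arbitrary stand-ins for “nearly broke” and “plenty of money already”:

```python
import math

def expected_log_utility(assets, gamble):
    """Expected utility over final asset totals, assuming U(w) = ln(w).

    `gamble` is a list of (probability, prize) pairs."""
    return sum(p * math.log(assets + prize) for p, prize in gamble)

option_1A = [(1.0, 24_000)]
option_1B = [(33 / 34, 27_000), (1 / 34, 0)]

for assets in (100, 1_000_000):
    eu_a = expected_log_utility(assets, option_1A)
    eu_b = expected_log_utility(assets, option_1B)
    print(f"starting assets ${assets:>9,}: pick {'1A' if eu_a > eu_b else '1B'}")
# Prints 1A for $100 and 1B for $1,000,000: the certain $24,000 wins only
# when the 1/34 chance of staying (nearly) broke would really hurt.
```

Whatever your starting assets, the sign of the comparison comes out the same for case 1 and case 2, which is the whole point.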

A number of commenters, yesterday, claimed that the preference pattern wasn’t irrational because of “the utility of certainty”, or something like that. One commenter even wrote U(Certainty) into an expected utility equation.

Does anyone remember that whole business about expected utility and utility being of fundamentally different types? Utilities are over outcomes. They are values you attach to particular, solid states of the world. You cannot feed a probability of 1 into a utility function. It makes no sense.
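One way to see the type error, sketched with Python type hints (the names here are illustrative, not anyone’s canonical formalism): utilities attach to outcomes, and probabilities enter only as weights in the expectation.

```python
from typing import Callable

Outcome = int                           # a final state of the world, e.g. total wealth
Lottery = list[tuple[float, Outcome]]   # (probability, outcome) pairs summing to 1
UtilityFn = Callable[[Outcome], float]  # defined over outcomes, never probabilities

def expected_utility(lottery: Lottery, u: UtilityFn) -> float:
    # Probabilities appear only here, as weights; there is no Outcome
    # called "certainty" for u to accept.
    return sum(p * u(outcome) for p, outcome in lottery)
```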

And before you sniff, “Hmph… you just want the math to be neat and tidy,” remember that, in this case, the price of departing the Bayesian Way was paying someone to throw a switch and then throw it back.

But what about that solid, warm feeling of reassurance? Isn’t that a utility?

That’s being human. Humans are not expected utility maximizers. Whether you want to relax and have fun, or pay some extra money for a feeling of certainty, depends on whether you care more about satisfying your intuitions or actually achieving the goal.

If you’re gambling at Las Vegas for fun, then by all means, don’t think about the expected utility—you’re going to lose money anyway.

But what if it were 24,000 lives at stake, instead of $24,000? The certainty effect is even stronger over human lives. Will you pay one human life to throw the switch, and another to switch it back?

Tolerating preference reversals makes a mockery of claims to optimization. If you drive from San Jose to San Francisco to Oakland to San Jose, over and over again, then you may get a lot of warm fuzzy feelings out of it, but you can’t be interpreted as having a destination—as trying to go somewhere.

When you have circular preferences, you’re not steering the future—just running in circles. If you enjoy running for its own sake, then fine. But if you have a goal—something you’re trying to actually accomplish—a preference reversal reveals a big problem. At least one of the choices you’re making must not be working to actually optimize the future in any coherent sense.

If what you care about is the warm fuzzy feeling of certainty, then fine. If someone’s life is at stake, then you had best realize that your intuitions are a greasy lens through which to see the world. Your feelings are not providing you with direct, veridical information about strategic consequences—it feels that way, but they’re not. Warm fuzzies can lead you far astray.

There are mathematical laws governing efficient strategies for steering the future. When something truly important is at stake—something more important than your feelings of happiness about the decision—then you should care about the math, if you truly care at all.