Zut Allais!

Huh! I was not expecting that response. Looks like I ran into an inferential distance.

It probably helps in interpreting the Allais Paradox to have absorbed more of the gestalt of the field of heuristics and biases, such as:

  • Experimental subjects tend to defend incoherent preferences even when they’re really silly.

  • People put very high values on small shifts in probability away from 0 or 1 (the certainty effect).

Let’s start with the issue of incoherent preferences—preference reversals, dynamic inconsistency, money pumps, that sort of thing.

Anyone who knows a little prospect theory will have no trouble constructing cases where people say they would prefer to play gamble A rather than gamble B; but when you ask them to price the gambles they put a higher value on gamble B than gamble A. There are different perceptual features that become salient when you ask “Which do you prefer?” in a direct comparison, and “How much would you pay?” with a single item.

My books are packed up for the move, but from what I remember, this should typically generate a preference reversal:

  1. 1/3 chance to win $18 and 2/3 chance to lose $1.50

  2. 19/20 chance to win $4 and 1/20 chance to lose $0.25

Most people will (IIRC) rather play 2 than 1. But if you ask them to price the bets separately—ask for a price at which they would be indifferent between having that amount of money, and having a chance to play the gamble—people will (IIRC) put a higher price on 1 than on 2. If I’m wrong about this exact example, nonetheless, there are plenty of cases where such a pattern is exhibited experimentally.
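A quick expected-value check on the two bets as stated above (a sketch using the text’s numbers, not figures from the original study) shows that pricing bet 1 higher is quite defensible, even though bet 2 feels safer in a direct comparison:

```python
# Expected values of the two gambles described above.
# Bet 1: 1/3 chance to win $18.00, 2/3 chance to lose $1.50
# Bet 2: 19/20 chance to win $4.00, 1/20 chance to lose $0.25
from fractions import Fraction

ev_bet1 = Fraction(1, 3) * Fraction(18) + Fraction(2, 3) * Fraction(-3, 2)
ev_bet2 = Fraction(19, 20) * Fraction(4) + Fraction(1, 20) * Fraction(-1, 4)

print(float(ev_bet1))  # 5.0
print(float(ev_bet2))  # 3.7875
```

Bet 1 is worth more on average; bet 2 merely wins more often. Asking “which would you play?” makes the win probability salient, while asking “what’s it worth?” makes the dollar amounts salient.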

So first you sell them a chance to play bet 1, at their stated price. Then you offer to trade bet 1 for bet 2. Then you buy bet 2 back from them, at their stated price. Then you do it again. Hence the phrase, “money pump”.

Or to paraphrase Steve Omohundro: If you would rather be in Oakland than San Francisco, and you would rather be in San Jose than Oakland, and you would rather be in San Francisco than San Jose, you’re going to spend an awful lot of money on taxi rides.

Amazingly, people defend these preference patterns. Some subjects abandon them after the money-pump effect is pointed out—revise their price or revise their preference—but some subjects defend them.

On one occasion, gamblers in Las Vegas played these kinds of bets for real money, using a roulette wheel. And afterward, one of the researchers tried to explain the problem with the incoherence between their pricing and their choices. From the transcript:

Experimenter: Well, how about the bid for Bet A? Do you have any further feelings about it now that you know you are choosing one but bidding more for the other one?
Subject: It’s kind of strange, but no, I don’t have any feelings at all whatsoever really about it. It’s just one of those things. It shows my reasoning process isn’t so good, but, other than that, I… no qualms.
E: Can I persuade you that it is an irrational pattern?
S: No, I don’t think you probably could, but you could try.
E: Well, now let me suggest what has been called a money-pump game and try this out on you and see how you like it. If you think Bet A is worth 550 points [points were converted to dollars after the game, though not on a one-to-one basis] then you ought to be willing to give me 550 points if I give you the bet...
E: So you have Bet A, and I say, “Oh, you’d rather have Bet B wouldn’t you?”
S: I’m losing money.
E: I’ll buy Bet B from you. I’ll be generous; I’ll pay you more than 400 points. I’ll pay you 401 points. Are you willing to sell me Bet B for 401 points?
S: Well, certainly.
E: I’m now ahead 149 points.
S: That’s good reasoning on my part. (laughs) How many times are we going to go through this?
E: Well, I think I’ve pushed you as far as I know how to push you short of actually insulting you.
S: That’s right.

You want to scream, “Just give up already! Intuition isn’t always right!”

And then there’s the business of the strange value that people attach to certainty. Again, I don’t have my books, but I believe that one experiment showed that a shift from 100% probability to 99% probability weighed more heavily in people’s minds than a shift from 80% probability to 20% probability.

The problem with attaching a huge extra value to certainty is that one time’s certainty is another time’s probability.

Yesterday I talked about the Allais Paradox:

  • 1A. $24,000, with certainty.

  • 1B. 33/34 chance of winning $27,000, and 1/34 chance of winning nothing.

  • 2A. 34% chance of winning $24,000, and 66% chance of winning nothing.

  • 2B. 33% chance of winning $27,000, and 67% chance of winning nothing.

The naive preference pattern on the Allais Paradox is 1A > 1B and 2B > 2A. Then you will pay me to throw a switch from A to B because you’d rather have a 33% chance of winning $27,000 than a 34% chance of winning $24,000. Then a die roll eliminates a chunk of the probability mass. In both cases you had at least a 66% chance of winning nothing. This die roll eliminates that 66%. So now option B is a 33/34 chance of winning $27,000, but option A is a certainty of winning $24,000. Oh, glorious certainty! So you pay me to throw the switch back from B to A.

Now, if I’ve told you in advance that I’m going to do all that, do you really want to pay me to throw the switch, and then pay me to throw it back? Or would you prefer to reconsider?

Whenever you try to price a probability shift from 24% to 23% as less important than a shift from ~1 to 99%—every time you try to make an increment of probability have more value when it’s near an end of the scale—you open yourself up to this kind of exploitation. I can always set up a chain of events that eliminates the probability mass, a bit at a time, until you’re left with “certainty” that flips your preferences. One time’s certainty is another time’s uncertainty, and if you insist on treating the distance from ~1 to 0.99 as special, I can cause you to invert your preferences over time and pump some money out of you.
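The die-roll elimination above is just a conditional-probability update, which a few lines confirm: once the shared 66% losing region is ruled out, option 2A becomes a certainty of $24,000 and option 2B becomes exactly gamble 1B.

```python
from fractions import Fraction

p_win_2A = Fraction(34, 100)   # 34% chance of $24,000
p_win_2B = Fraction(33, 100)   # 33% chance of $27,000
p_survive = Fraction(34, 100)  # the die roll spares only this much probability mass

# Condition each gamble on surviving the shared 66% losing region.
print(p_win_2A / p_survive)  # 1     -- 2A is now a certainty of $24,000
print(p_win_2B / p_survive)  # 33/34 -- 2B is now exactly gamble 1B
```

So choosing 2B but 1A means your preference between the same two gambles flips depending on whether the 66% has been eliminated yet—which is the handle the money pump turns.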

Can I persuade you, perhaps, that this is an irrational pattern?

Surely, if you’ve been reading this blog for a while, you realize that you—the very system and process that reads these very words—are a flawed piece of machinery. Your intuitions are not giving you direct, veridical information about good choices. If you don’t believe that, there are some gambling games I’d like to play with you.

There are various other games you can also play with certainty effects. For example, if you offer someone a certainty of $400, or an 80% probability of $500 and a 20% probability of $300, they’ll usually take the $400. But if you ask people to imagine themselves $500 richer, and ask if they would prefer a certain loss of $100 or a 20% chance of losing $200, they’ll usually take the chance of losing $200. Same probability distribution over outcomes, different descriptions, different choices.
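The two framings above really do describe the same distribution over final wealth, as a few lines of bookkeeping (a sketch using the dollar amounts from the example) make explicit:

```python
# Frame 1: a sure $400, or a gamble: 80% chance of $500, 20% chance of $300.
sure1 = 400
gamble1 = {500: 0.8, 300: 0.2}

# Frame 2: imagine yourself $500 richer, then either a sure loss of $100,
# or a 20% chance of losing $200 (hence an 80% chance of losing nothing).
start = 500
sure2 = start - 100
gamble2 = {start - 200: 0.2, start - 0: 0.8}

# Same final-wealth outcomes, different descriptions.
print(sure1 == sure2)      # True
print(gamble1 == gamble2)  # True
```

An expected utility maximizer over final outcomes would give the same answer in both frames; people reliably don’t, because “loss” and “gain” are read off the description, not the outcome.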

Yes, Virginia, you really should try to multiply the utility of outcomes by their probability. You really should. Don’t be embarrassed to use clean math.

In the Allais paradox, figure out whether 1 unit of the difference between getting $24,000 and getting nothing, outweighs 33 units of the difference between getting $24,000 and $27,000. If it does, prefer 1A to 1B and 2A to 2B. If the 33 units outweigh the 1 unit, prefer 1B to 1A and 2B to 2A. As for calculating the utility of money, I would suggest using an approximation that assumes utility is logarithmic in money. If you’ve got plenty of money already, pick B. If $24,000 would double your existing assets, pick A. Case 1 or case 2, makes no difference. Oh, and be sure to assess the utility of total asset values—the utility of final outcome states of the world—not changes in assets, or you’ll end up inconsistent again.
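As a sketch of that calculation: assuming utility logarithmic in total wealth W, option 1A is preferred exactly when ln(W + 24000) exceeds (33/34)·ln(W + 27000) + (1/34)·ln(W). Where the crossover falls depends on W:

```python
import math

def prefers_certain_24k(wealth: float) -> bool:
    """Under log utility of total assets, does 1A beat 1B at this wealth level?"""
    u_1a = math.log(wealth + 24_000)
    u_1b = (33 / 34) * math.log(wealth + 27_000) + (1 / 34) * math.log(wealth)
    return u_1a > u_1b

print(prefers_certain_24k(100))        # True: near-broke, take the sure $24,000
print(prefers_certain_24k(1_000_000))  # False: wealthy, take the better gamble
```

Note that whatever W you plug in, the function gives the same verdict for case 1 and case 2—the log-utility agent never exhibits the Allais pattern.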

A number of commenters, yesterday, claimed that the preference pattern wasn’t irrational because of “the utility of certainty”, or something like that. One commenter even wrote U(Certainty) into an expected utility equation.

Does anyone remember that whole business about expected utility and utility being of fundamentally different types? Utilities are over outcomes. They are values you attach to particular, solid states of the world. You cannot feed a probability of 1 into a utility function. It makes no sense.

And before you sniff, “Hmph… you just want the math to be neat and tidy,” remember that, in this case, the price of departing the Bayesian Way was paying someone to throw a switch and then throw it back.

But what about that solid, warm feeling of reassurance? Isn’t that a utility?

That’s being human. Humans are not expected utility maximizers. Whether you want to relax and have fun, or pay some extra money for a feeling of certainty, depends on whether you care more about satisfying your intuitions or actually achieving the goal.

If you’re gambling at Las Vegas for fun, then by all means, don’t think about the expected utility—you’re going to lose money anyway.

But what if it were 24,000 lives at stake, instead of $24,000? The certainty effect is even stronger over human lives. Will you pay one human life to throw the switch, and another to switch it back?

Tolerating preference reversals makes a mockery of claims to optimization. If you drive from San Jose to San Francisco to Oakland to San Jose, over and over again, then you may get a lot of warm fuzzy feelings out of it, but you can’t be interpreted as having a destination—as trying to go somewhere.

When you have circular preferences, you’re not steering the future—just running in circles. If you enjoy running for its own sake, then fine. But if you have a goal—something you’re trying to actually accomplish—a preference reversal reveals a big problem. At least one of the choices you’re making must not be working to actually optimize the future in any coherent sense.

If what you care about is the warm fuzzy feeling of certainty, then fine. If someone’s life is at stake, then you had best realize that your intuitions are a greasy lens through which to see the world. Your feelings are not providing you with direct, veridical information about strategic consequences—it feels that way, but they’re not. Warm fuzzies can lead you far astray.

There are mathematical laws governing efficient strategies for steering the future. When something truly important is at stake—something more important than your feelings of happiness about the decision—then you should care about the math, if you truly care at all.