# Zut Allais!

Huh! I was not ex­pect­ing that re­sponse. Looks like I ran into an in­fer­en­tial dis­tance.

It prob­a­bly helps in in­ter­pret­ing the Allais Para­dox to have ab­sorbed more of the gestalt of the field of heuris­tics and bi­ases, such as:

• Ex­per­i­men­tal sub­jects tend to defend in­co­her­ent prefer­ences even when they’re re­ally silly.

• Peo­ple put very high val­ues on small shifts in prob­a­bil­ity away from 0 or 1 (the cer­tainty effect).

Let’s start with the is­sue of in­co­her­ent prefer­ences—prefer­ence re­ver­sals, dy­namic in­con­sis­tency, money pumps, that sort of thing.

Any­one who knows a lit­tle prospect the­ory will have no trou­ble con­struct­ing cases where peo­ple say they would pre­fer to play gam­ble A rather than gam­ble B; but when you ask them to price the gam­bles they put a higher value on gam­ble B than gam­ble A. There are differ­ent per­cep­tual fea­tures that be­come salient when you ask “Which do you pre­fer?” in a di­rect com­par­i­son, and “How much would you pay?” with a sin­gle item.

My books are packed up for the move, but from what I re­mem­ber, this should typ­i­cally gen­er­ate a prefer­ence re­ver­sal:

1. 1/3 to win \$18 and 2/3 to lose \$1.50

2. 19/20 to win \$4 and 1/20 to lose \$0.25

Most peo­ple will (IIRC) rather play 2 than 1. But if you ask them to price the bets sep­a­rately—ask for a price at which they would be in­differ­ent be­tween hav­ing that amount of money, and hav­ing a chance to play the gam­ble—peo­ple will (IIRC) put a higher price on 1 than on 2. If I’m wrong about this ex­act ex­am­ple, nonethe­less, there are plenty of cases where such a pat­tern is ex­hibited ex­per­i­men­tally.
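The pattern is easier to see with the arithmetic laid out. A quick sketch (assuming the fractions above are right: a 1/3 win chance for bet 1 and 19/20 for bet 2):

```python
# Bet 1: 1/3 chance to win $18.00, 2/3 chance to lose $1.50
ev_bet1 = (1 / 3) * 18.00 + (2 / 3) * (-1.50)

# Bet 2: 19/20 chance to win $4.00, 1/20 chance to lose $0.25
ev_bet2 = (19 / 20) * 4.00 + (1 / 20) * (-0.25)

print(round(ev_bet1, 4), round(ev_bet2, 4))  # bet 1 is worth more on average
```

Bet 1 comes out near \$5.00 in expectation and bet 2 near \$3.79, so a higher stated price for bet 1 is defensible on its own, and so is preferring the near-sure win in bet 2 on its own; it is the combination of the two that opens you up to the pump.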

So first you sell them a chance to play bet 1, at their stated price. Then you offer to trade bet 1 for bet 2. Then you buy bet 2 back from them, at their stated price. Then you do it again. Hence the phrase, “money pump”.
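Spelled out as a loop, with hypothetical prices standing in for whatever the subject actually quotes:

```python
# Hypothetical stated prices: the subject prices bet 1 above bet 2,
# yet prefers holding bet 2 in a direct choice.
price_bet1 = 5.00  # subject's stated indifference price for bet 1
price_bet2 = 4.00  # subject's stated indifference price for bet 2

subject_cash = 20.00
for _ in range(3):  # three trips around the loop
    subject_cash -= price_bet1  # subject buys bet 1 at their own price
    # experimenter swaps bet 1 for bet 2 (subject prefers bet 2, so agrees)
    subject_cash += price_bet2  # experimenter buys bet 2 back at their price

print(subject_cash)  # the price gap drains out, one cycle at a time
```

Each cycle transfers the gap between the two stated prices to the experimenter, for as long as the subject's preferences and prices both stand.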

Or to para­phrase Steve Omo­hun­dro: If you would rather be in Oak­land than San Fran­cisco, and you would rather be in San Jose than Oak­land, and you would rather be in San Fran­cisco than San Jose, you’re go­ing to spend an awful lot of money on taxi rides.

Amaz­ingly, peo­ple defend these prefer­ence pat­terns. Some sub­jects aban­don them af­ter the money-pump effect is pointed out—re­vise their price or re­vise their prefer­ence—but some sub­jects defend them.

On one oc­ca­sion, gam­blers in Las Ve­gas played these kinds of bets for real money, us­ing a roulette wheel. And af­ter­ward, one of the re­searchers tried to ex­plain the prob­lem with the in­co­her­ence be­tween their pric­ing and their choices. From the tran­script:

Ex­per­i­menter: Well, how about the bid for Bet A? Do you have any fur­ther feel­ings about it now that you know you are choos­ing one but bid­ding more for the other one?
Sub­ject: It’s kind of strange, but no, I don’t have any feel­ings at all what­so­ever re­ally about it. It’s just one of those things. It shows my rea­son­ing pro­cess isn’t so good, but, other than that, I… no qualms.
...
E: Can I per­suade you that it is an ir­ra­tional pat­tern?
S: No, I don’t think you prob­a­bly could, but you could try.
...
E: Well, now let me sug­gest what has been called a money-pump game and try this out on you and see how you like it. If you think Bet A is worth 550 points [points were con­verted to dol­lars af­ter the game, though not on a one-to-one ba­sis] then you ought to be will­ing to give me 550 points if I give you the bet...
...
E: So you have Bet A, and I say, “Oh, you’d rather have Bet B wouldn’t you?”
...
S: I’m los­ing money.
E: I’ll buy Bet B from you. I’ll be gen­er­ous; I’ll pay you more than 400 points. I’ll pay you 401 points. Are you will­ing to sell me Bet B for 401 points?
S: Well, cer­tainly.
...
E: I’m now ahead 149 points.
S: That’s good rea­son­ing on my part. (laughs) How many times are we go­ing to go through this?
...
E: Well, I think I’ve pushed you as far as I know how to push you short of ac­tu­ally in­sult­ing you.
S: That’s right.

You want to scream, “Just give up already! In­tu­ition isn’t always right!”

And then there’s the busi­ness of the strange value that peo­ple at­tach to cer­tainty. Again, I don’t have my books, but I be­lieve that one ex­per­i­ment showed that a shift from 100% prob­a­bil­ity to 99% prob­a­bil­ity weighed larger in peo­ple’s minds than a shift from 80% prob­a­bil­ity to 20% prob­a­bil­ity.

The prob­lem with at­tach­ing a huge ex­tra value to cer­tainty is that one time’s cer­tainty is an­other time’s prob­a­bil­ity.

Yes­ter­day I talked about the Allais Para­dox:

• 1A. \$24,000, with cer­tainty.

• 1B. 33/34 chance of winning \$27,000, and 1/34 chance of winning nothing.

• 2A. 34% chance of win­ning \$24,000, and 66% chance of win­ning noth­ing.

• 2B. 33% chance of win­ning \$27,000, and 67% chance of win­ning noth­ing.

The naive preference pattern on the Allais Paradox is 1A > 1B and 2B > 2A. Then you will pay me to throw a switch from A to B because you’d rather have a 33% chance of winning \$27,000 than a 34% chance of winning \$24,000. Then a die roll eliminates a chunk of the probability mass. In both cases you had at least a 66% chance of winning nothing. This die roll eliminates that 66%. So now option B is a 33/34 chance of winning \$27,000, but option A is a certainty of winning \$24,000. Oh, glorious certainty! So you pay me to throw the switch back from B to A.
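The bookkeeping behind the switch-throwing, as a sketch in exact fractions. The key identity is that gamble 2A is just "a 34% chance to play 1A", and likewise 2B for 1B, so a utility function over final outcomes has to rank both pairs the same way:

```python
from fractions import Fraction

# Expected dollar values of the four gambles.
ev_1a = Fraction(24000)            # certainty of $24,000
ev_1b = Fraction(33, 34) * 27000   # 33/34 chance of $27,000
ev_2a = Fraction(34, 100) * 24000  # 34% chance of $24,000
ev_2b = Fraction(33, 100) * 27000  # 33% chance of $27,000

# 2A and 2B are exactly "a 34% chance to play 1A (resp. 1B)":
assert ev_2a == Fraction(34, 100) * ev_1a
assert ev_2b == Fraction(34, 100) * ev_1b  # since 0.34 * 33/34 == 0.33

print(float(ev_1b), float(ev_2b))  # B beats A on raw expected dollars in both cases
```

On raw expected dollars, B wins both comparisons; whether B still wins after converting dollars to utility depends on your utility function, but the two cases can never come out pointing in opposite directions.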

Now, if I’ve told you in ad­vance that I’m go­ing to do all that, do you re­ally want to pay me to throw the switch, and then pay me to throw it back? Or would you pre­fer to re­con­sider?

When­ever you try to price a prob­a­bil­ity shift from 24% to 23% as be­ing less im­por­tant than a shift from ~1 to 99% - ev­ery time you try to make an in­cre­ment of prob­a­bil­ity have more value when it’s near an end of the scale—you open your­self up to this kind of ex­ploita­tion. I can always set up a chain of events that elimi­nates the prob­a­bil­ity mass, a bit at a time, un­til you’re left with “cer­tainty” that flips your prefer­ences. One time’s cer­tainty is an­other time’s un­cer­tainty, and if you in­sist on treat­ing the dis­tance from ~1 to 0.99 as spe­cial, I can cause you to in­vert your prefer­ences over time and pump some money out of you.

Can I per­suade you, per­haps, that this is an ir­ra­tional pat­tern?

Surely, if you’ve been read­ing this blog for a while, you re­al­ize that you—the very sys­tem and pro­cess that reads these very words—are a flawed piece of ma­chin­ery. Your in­tu­itions are not giv­ing you di­rect, veridi­cal in­for­ma­tion about good choices. If you don’t be­lieve that, there are some gam­bling games I’d like to play with you.

There are var­i­ous other games you can also play with cer­tainty effects. For ex­am­ple, if you offer some­one a cer­tainty of \$400, or an 80% prob­a­bil­ity of \$500 and a 20% prob­a­bil­ity of \$300, they’ll usu­ally take the \$400. But if you ask peo­ple to imag­ine them­selves \$500 richer, and ask if they would pre­fer a cer­tain loss of \$100 or a 20% chance of los­ing \$200, they’ll usu­ally take the chance of los­ing \$200. Same prob­a­bil­ity dis­tri­bu­tion over out­comes, differ­ent de­scrip­tions, differ­ent choices.
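Writing out the two framings as explicit outcome distributions (net change from the starting point) makes the identity visible:

```python
# Frame 1: a sure $400, versus 80% chance of $500 and 20% chance of $300.
frame1_sure = {400: 1.0}
frame1_gamble = {500: 0.8, 300: 0.2}

# Frame 2: imagine yourself $500 richer, then face a sure loss of $100,
# versus a 20% chance of losing $200 (and an 80% chance of losing nothing).
frame2_sure = {500 - 100: 1.0}
frame2_gamble = {500 - 0: 0.8, 500 - 200: 0.2}

# Same distributions over final outcomes, different descriptions:
assert frame1_sure == frame2_sure
assert frame1_gamble == frame2_gamble
```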

Yes, Virginia, you re­ally should try to mul­ti­ply the util­ity of out­comes by their prob­a­bil­ity. You re­ally should. Don’t be em­bar­rassed to use clean math.

In the Allais paradox, figure out whether 1 unit of the difference between getting \$24,000 and getting nothing outweighs 33 units of the difference between getting \$24,000 and \$27,000. If it does, prefer 1A to 1B and 2A to 2B. If the 33 units outweigh the 1 unit, prefer 1B to 1A and 2B to 2A. As for calculating the utility of money, I would suggest using an approximation that assumes utility is logarithmic in money. If you’ve got plenty of money already, pick B. If \$24,000 would double your existing assets, pick A. Case 2 or case 1, makes no difference. Oh, and be sure to assess the utility of total asset values—the utility of final outcome states of the world—not changes in assets, or you’ll end up inconsistent again.
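A sketch of that recipe in code, using log-of-final-assets as the utility function and hypothetical wealth levels. The `p_play` parameter shows why case 1 and case 2 must agree: gamble 2X is a 34% chance to play gamble 1X, and the shared "win nothing" branch adds the same term to both sides of the comparison.

```python
import math

def prefers_a(wealth, p_play=1.0):
    """True if expected log-utility of final assets favors gamble A over B.

    p_play = 1.0 gives case 1 (1A vs. 1B); p_play = 0.34 gives case 2.
    The common (1 - p_play) "win nothing" branch cancels out of the
    comparison, so the verdict cannot depend on p_play.
    """
    eu_a = (p_play * math.log(wealth + 24000)
            + (1 - p_play) * math.log(wealth))
    eu_b = (p_play * ((33 / 34) * math.log(wealth + 27000)
                      + (1 / 34) * math.log(wealth))
            + (1 - p_play) * math.log(wealth))
    return eu_a > eu_b

# Nearly broke: take the sure $24,000.  Already rich: take the extra EV.
print(prefers_a(500), prefers_a(1_000_000))
# Case 1 and case 2 give the same verdict:
assert prefers_a(500) == prefers_a(500, p_play=0.34)
```

(At in-between wealth levels the two expected utilities are close, and where exactly the crossover falls depends on the utility function you assume; the invariant point is only that the case-1 and case-2 verdicts match.)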

A num­ber of com­menters, yes­ter­day, claimed that the prefer­ence pat­tern wasn’t ir­ra­tional be­cause of “the util­ity of cer­tainty”, or some­thing like that. One com­menter even wrote U(Cer­tainty) into an ex­pected util­ity equa­tion.

Does any­one re­mem­ber that whole busi­ness about ex­pected util­ity and util­ity be­ing of fun­da­men­tally differ­ent types? Utilities are over out­comes. They are val­ues you at­tach to par­tic­u­lar, solid states of the world. You can­not feed a prob­a­bil­ity of 1 into a util­ity func­tion. It makes no sense.

And be­fore you sniff, “Hmph… you just want the math to be neat and tidy,” re­mem­ber that, in this case, the price of de­part­ing the Bayesian Way was pay­ing some­one to throw a switch and then throw it back.

But what about that solid, warm feel­ing of re­as­surance? Isn’t that a util­ity?

That’s be­ing hu­man. Hu­mans are not ex­pected util­ity max­i­miz­ers. Whether you want to re­lax and have fun, or pay some ex­tra money for a feel­ing of cer­tainty, de­pends on whether you care more about satis­fy­ing your in­tu­itions or ac­tu­ally achiev­ing the goal.

If you’re gam­bling at Las Ve­gas for fun, then by all means, don’t think about the ex­pected util­ity—you’re go­ing to lose money any­way.

But what if it were 24,000 lives at stake, in­stead of \$24,000? The cer­tainty effect is even stronger over hu­man lives. Will you pay one hu­man life to throw the switch, and an­other to switch it back?

Tol­er­at­ing prefer­ence re­ver­sals makes a mock­ery of claims to op­ti­miza­tion. If you drive from San Jose to San Fran­cisco to Oak­land to San Jose, over and over again, then you may get a lot of warm fuzzy feel­ings out of it, but you can’t be in­ter­preted as hav­ing a des­ti­na­tion—as try­ing to go some­where.

When you have cir­cu­lar prefer­ences, you’re not steer­ing the fu­ture—just run­ning in cir­cles. If you en­joy run­ning for its own sake, then fine. But if you have a goal—some­thing you’re try­ing to ac­tu­ally ac­com­plish—a prefer­ence re­ver­sal re­veals a big prob­lem. At least one of the choices you’re mak­ing must not be work­ing to ac­tu­ally op­ti­mize the fu­ture in any co­her­ent sense.

If what you care about is the warm fuzzy feel­ing of cer­tainty, then fine. If some­one’s life is at stake, then you had best re­al­ize that your in­tu­itions are a greasy lens through which to see the world. Your feel­ings are not pro­vid­ing you with di­rect, veridi­cal in­for­ma­tion about strate­gic con­se­quences—it feels that way, but they’re not. Warm fuzzies can lead you far astray.

There are math­e­mat­i­cal laws gov­ern­ing effi­cient strate­gies for steer­ing the fu­ture. When some­thing truly im­por­tant is at stake—some­thing more im­por­tant than your feel­ings of hap­piness about the de­ci­sion—then you should care about the math, if you truly care at all.

• You can­not feed a prob­a­bil­ity of 1 into a util­ity func­tion. It makes no sense.

I think “U(Cer­tainty)” was meant to be short­hand for U(feel­ing of cer­tainty). Other­wise—well said.

• Hmmm.… I thought the point of your article at http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/ was that the difference between 1 and .99 was indeed much larger than, say, .48 and .49.

Any­way, what if we try this one on for size: let’s say you are go­ing to play a hand of Texas Hold ’em and you can choose one of the fol­low­ing three hands (none of them are suited): AK, JT, or 22. If we say that hand X > Y if hand X will win against hand Y more that 50% of the time, then AK > JT > 22 > AK > JT ….. etc. So in this case couldn’t one choose ra­tio­nally and yet still be a “money pump”?

• No, be­cause you don’t want to switch to a hand that will beat the one you have, you want to switch to one that’s more likely to beat your op­po­nent’s (un­known, fixed) hand. That’s nec­es­sar­ily tran­si­tive.

• Okay, just one more ques­tion, Eliezer: when are you go­ing to sit down and con­dense your work at Over­com­ing Bias into a rea­son­ably com­pact New York Times best­sel­ler?

Hmmm.… I thought the point of your article at http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/ was that the difference between 1 and .99 was indeed much larger than, say, .48 and .49.

Heh, I won­dered if some­one would bring that up.

You have to use the right dis­tance mea­sure for the right pur­pose. The co­her­ence proofs on Bayes’s The­o­rem show that if you want the dis­tance be­tween prob­a­bil­ities to equal the amount of ev­i­dence re­quired to shift be­tween them, you have no choice but to use the log odds.

What the co­her­ence proofs for the ex­pected util­ity equa­tion show, is more sub­tle. Roughly, the “dis­tance” be­tween prob­a­bil­ities cor­re­sponds to the amount of one out­come-shift that you need to com­pen­sate for an­other out­come-shift. If one unit of prob­a­bil­ity goes from an out­come of “cur­rent wealth + \$24,000″ to an out­come of “cur­rent wealth”, how many units of prob­a­bil­ity shift­ing from “cur­rent + \$24K” to “cur­rent + \$27K” do you need to make up for that? What the co­her­ence proofs for ex­pected util­ity show, and the point of the Allais para­dox, is that the in­var­i­ant mea­sure of dis­tance be­tween prob­a­bil­ities for this pur­pose is the usual mea­sure be­tween 0 and 1. That is, the dis­tance be­tween ~0 and 0.01, or 0.33 and 0.34, or 0.99 and ~1, are all the same dis­tance.
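The contrast between the two metrics can be checked directly. As a sketch: log odds (the evidence metric from the earlier post) stretch out near 0 and 1, while the steps that matter for the expected-utility ledger are plain differences, the same size anywhere on the scale:

```python
import math

def log_odds(p):
    """Evidence metric: distance in log-odds space."""
    return math.log(p / (1 - p))

# In the evidence metric, a 0.01 step near certainty is enormous
# compared with the same step in the middle of the scale:
step_near_one = log_odds(0.99) - log_odds(0.98)   # about 0.70
step_in_middle = log_odds(0.34) - log_odds(0.33)  # about 0.045
assert step_near_one > 10 * step_in_middle

# For the expected-utility bookkeeping, both steps move the same 0.01
# of probability mass between the same pair of outcomes:
assert math.isclose(0.99 - 0.98, 0.34 - 0.33)
```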

You’ve got to use the right prob­a­bil­ity met­ric to pre­serve the right in­var­i­ance rel­a­tive to the right trans­for­ma­tion.

Other­wise, shift­ing you in time (by giv­ing you more in­for­ma­tion, for ex­am­ple about the roll of a die) will shift your per­ceived dis­tances, and your prefer­ences will switch, turn­ing you into a money pump.

Neat, huh?

Okay, just one more ques­tion, Eliezer: when are you go­ing to sit down and con­dense your work at Over­com­ing Bias into a rea­son­ably com­pact New York Times best­sel­ler?

The key word is com­pact. It’s a funny thing, but I have to write all these ex­tra de­tails on the blog be­fore I can leave them out of the book. Other­wise, they’ll burst out into the text and get in the way.

So the an­swer is “not yet”—there are still too many things I would be tempted to say in the book, if I didn’t say them here.

• “What the co­her­ence proofs for ex­pected util­ity show, and the point of the Allais para­dox, is that the in­var­i­ant mea­sure of dis­tance be­tween prob­a­bil­ities for this pur­pose is the usual mea­sure be­tween 0 and 1. That is, the dis­tance be­tween ~0 and 0.01, or 0.33 and 0.34, or 0.99 and ~1, are all the same dis­tance.”

In this example. If it had been the difference between .99 and 1, rather than 33/34 and 1, then under normal utility of money functions, it would be reasonable to prefer A in the one case and B in the other. But that difference can’t be duplicated by the money pump you choose. The ratios of probability are what matter for this. 33/34 to 1 is the same ratio as .33 to .34.

So it turns out that log odds is the right an­swer here also. If the differ­ence in the log odds is the same, then the bet is es­sen­tially the same.

• Far more im­por­tant than ra­tio­nal­ity is the story of who we are and what we be­lieve. I think that may be the best ra­tio­nal ex­pla­na­tion for your in­sis­tence on try­ing to con­vince peo­ple that ra­tio­nal­ity is a good thing. It’s your story and it ob­vi­ously means a lot to you.

There is no spe­cial ra­tio­nal ba­sis for claiming that when lives are at stake, it’s es­pe­cially im­por­tant to be ra­tio­nal, be­cause the value we place on lives is be­yond ra­tio­nal con­trol or as­sess­ment. But there may be any num­ber of non-ra­tio­nal rea­sons to be ra­tio­nal… or ap­pear ra­tio­nal, any­way.

Ra­tion­al­ity is a game. It’s a game I, per­son­ally, like to play. Ir­ra­tional­ity is how hu­mans ac­tu­ally live and ex­pe­rience the world, most of the time.

• James Bach, your point and EY’s are not in­com­pat­i­ble : it is a given that what you care about and give im­por­tance to is sub­jec­tive and ir­ra­tional, how­ever hav­ing cho­sen what out­comes you care about, your best road to achiev­ing them must be Bayesian.… per­haps. My prob­lem with this whole Bayesian kick is that it re­minds me of putting three masts and a full set of square-rigged sails on what is ba­si­cally a ca­noe : the masts and sails are the Bayesian ed­ifice, the ca­noe is our use­ful knowl­edge in any given real life situ­a­tion.

• tcp­kac—that’s what they said to Colum­bus.

The cir­cu­lar money pump kept bring­ing M C Escher illus­tra­tions to my mind—the never-end­ing stair­case in par­tic­u­lar. This post cleared up a lot of what I didn’t take in yes­ter­day—thanks for tak­ing the time.

There is no spe­cial ra­tio­nal ba­sis for claiming that when lives are at stake, it’s es­pe­cially im­por­tant to be ra­tio­nal.

James—the rea­son ‘lives at stake’ comes up in ex­am­ples is be­cause the value we place on hu­man life tends to dwarf ev­ery­thing else. Just be­cause that value is enor­mous, doesn’t mean it’s un­calcu­la­ble. Con­sid­er­ing lives is the best way to force our­selves to think as eco­nom­i­cally as we can—more so than money (many peo­ple are rich already). It may give us a pang of cold, in­hu­man log­i­cal­ity to sit down with a piece of pa­per and a pen to work out the best way to save lives, but that’s the game.

• Just a quick cor­rec­tion-

~~Experimental subjects~~ Experimenters tend to ~~defend~~ attack incoherent preferences even when they’re ~~really silly~~ strongly held.

;->

I guess what I’d like to know is whether you are a) trying to figure out what people do, or b) trying to predict outcomes and then tell people what to do? Despite my slightly snarky tone, as a curious outsider I really am curious as to which you take to be your goal. Coming from a science-y background, I can totally understand b), but life has shown me plenty of instances of people acting contrary to b)’s predictions.

• Maybe the rea­son we tend to choose bet 2 over bet 1 (be­fore com­put­ing the ac­tual ex­pected win­nings) is not the higher prob­a­bil­ity to win, but the smaller sum we can lose (ei­ther we ex­pect to lose or we can lose at worst, I’m not sure about that). So the bias here could be more some­thing along the lines of sta­tus quo bias or en­down­ment effect than a need for cer­tainty.

I can only speak for my­self, but I do not in­tu­itively value cer­tainty/​high prob­a­bil­ity of win­ning, while I am bi­ased to­wards avoid­ing losses.

• tcp­kac—that’s what they said to Colum­bus.

Colum­bus was an idiot who screwed up his rea­son­ing, didn’t find what he was look­ing for, was saved only by the ex­is­tence of some­thing en­tirely differ­ent, and died with­out ever figur­ing out that what he found wasn’t what he sought.

• Hear, hear!

Although I understand he had his coat of arms, which featured some islands, updated to feature a continent, which may suggest he figured it out at some point. Didn’t do him much good though—you left out the bit where he tortured the natives on the island he was governor of when they couldn’t give him the gold he demanded, and got himself fired and shipped back to Spain.

• I don’t think the possibility of a money-pump is always a knock-down reductio. It really only makes my preferences seem foolish in the long run. But there isn’t a long run here: it’s a once-in-a-lifetime deal. If you told me that you would make the same offer to me thousands of times, I would of course do the clean math that you suggest.

Sup­pose you are deathly thirsty, have only \$1 in your pocket, and find your­self fac­ing two bot­tled-wa­ter ma­chines: The first would dis­pense a bot­tle with cer­tainty for the full dol­lar, and the sec­ond would do so with a prob­a­bil­ity and price such that “clean math” sug­gests it is the slightly more ra­tio­nal choice. Etc.

• The ra­tio­nal choice would be the one that re­sults in the high­est ex­pected util­ity. In this case, it wouldn’t nec­es­sar­ily be the one with the high­est ex­pected amount of wa­ter. This is be­cause the first bot­tle of wa­ter is worth far more then the sec­ond.

The amount of money you make over your life­time dwarfs the amount you make in these ex­am­ples. The ex­pected util­ity of the money isn’t go­ing to change much.

It seems hard to be­lieve that the op­tion of go­ing from B to C and then from C to A would change whether or not it’s a good idea. After all, you can always go from A to B and then re­fuse to change. Then there’d be no long run. Of course, once you’ve done that, you might as well go from B to C and stop there, etc.

• On sec­ond thought, strike my sec­ond para­graph (in the 10:08 am com­ment).

I shouldn’t have tin­kered with the (well-known?) ex­am­ple I first heard. It’s a coke ma­chine on a mil­i­tary base that in­tro­duced a price in­crease by dis­pens­ing cokes with a lower prob­a­bil­ity rather than with a nom­i­nal price in­crease. To the sol­diers that live and work there, the ma­chine is equiv­a­lent to one with a nom­i­nal price in­crease. The philoso­pher’s ques­tion is whether this ma­chine is fair to some­one who is just pass­ing through, some­one for whom there is no long run.

What I mean to sug­gest by the ex­am­ple is that the chance deals you offer peo­ple do not have a long-run, and your scheme of ra­tio­nal choice is hard to jus­tify with­out a long-run, no? Can you say some­thing about this, Eliezer?

• To the sol­diers that live and work there, the ma­chine is equiv­a­lent to one with a nom­i­nal price in­crease.

No, it isn’t. It’s pos­si­ble that ran­dom hap­pen­stance could deny some peo­ple more cans than oth­ers. The set of out­comes only be­comes equiv­a­lent to rais­ing the price as the num­ber of uses in­creases to in­finity—and that’s never as­sum­able in the real world.

It’s not fair to the peo­ple that live and work there, ei­ther.

• “Not yet”

So it’s go­ing to hap­pen even­tu­ally! Yay!

Back on topic, I sec­ond Lee’s thoughts. My abil­ity to do a sim­ple ex­pected util­ity calcu­la­tion would pre­vent me from ever tak­ing op­tion (1) given a choice be­tween the two, but if I badly needed the \$4 for some­thing I might take it. (Ini­tially hard to think of how such a sce­nario could arise, but \$4 might be enough to con­tact some­one who would be will­ing to bail me out of what­ever trou­ble I was in.)

• Dagon made a point about the so­cial im­por­tance of guaran­tees. If a promise is bro­ken, you know you have been cheated. If you are per­suaded that there is only a 10% chance of los­ing your in­vest­ment and you are un­lucky, what do you know?

I doubt that we can over come our bi­ases by fo­cus­ing on what they are bad for and how they hurt us. We also need to think about what they are good for and why we have them.

• Long run? What? Which ex­actly equiv­a­lent ran­dom events are you go­ing to ex­pe­rience more than once? And if the events are only re­ally close to equiv­a­lent, how do you jus­tify say­ing that 30 one-time shots at com­pletely differ­ent ways of gain­ing 1 util­ity unit is a fun­da­men­tally differ­ent thing than a nearly-ex­actly-re­peated game where you have 30 chances to gain 1 util­ity unit each time?

• There is noth­ing ir­ra­tional about choos­ing 1A over 1B or choos­ing 2B over 2A. Com­bin­ing the two into a sin­gle scheme or an iter­ated choice are to­tally differ­ent situ­a­tions from the origi­nal propo­si­tion.

Too much re­search on cog­ni­tion, es­pe­cially bi­ases, tends to in­fer too much from sim­plified ex­per­i­ments. True, in this case many peo­ple slip into a money pump situ­a­tion eas­ily, but the origi­nal propo­si­tion does not re­quire that to oc­cur.

• Con­text is king.

This sort of dilemma de­pends on con­text. Some may have been cheated in the past, so cer­tainty is valuable to them. Others may need ex­actly \$24,000, and oth­ers may need ex­actly \$27,000 for a larger (higher util­ity) pur­pose. Others may have differ­ent risk tol­er­ance.

You may ar­gue that, given only this de­ci­sion and no out­side in­fluences, a per­son would be ir­ra­tional to choose a par­tic­u­lar way. Un­for­tu­nately, you will never find a rea­son­ing be­ing with­out con­text.

This is ex­actly the Bayesian way. Pre­vi­ous ex­pe­rience defines what is cur­rently ra­tio­nal. Later ex­pe­rience may show the ear­lier ac­tions to have been im­perfect, or un­wise to re­peat. But to say that we are ir­ra­tional be­cause we are bas­ing our de­ci­sion on our own per­sonal con­text is to deny ev­ery­thing that you have built up to this point.

Con­text is ev­ery­thing fol­low­ing, “E(X given ”. Do not deny the value of it by as­sert­ing that, in one spe­cific in­stance, it mis­lead us. We may learn from ad­di­tional data, but it did not mis­lead us.

• I see the im­por­tance of con­text gen­er­ally over­looked in this dis­cus­sion, as in most dis­cus­sion of ra­tio­nal­ity (en­com­pass­ing dis­cus­sion of ra­tio­nal meth­ods.) The el­e­gance and ap­pli­ca­bil­ity of Bayesian in­fer­ence is to me un­de­ni­able, but I look for­ward to broader dis­cus­sion of its ap­pli­ca­tion within sys­tems of effec­tive de­ci­sion-mak­ing en­tailing pre­dic­tion within a con­text which is not only un­cer­tain but evolv­ing. In other words, con­sid­er­a­tion of prin­ci­ples of effec­tive agency where the game it­self is in­her­ently un­cer­tain.

In do­ing so, I think we are driven to re-frame our think­ing away from goals to be achieved and onto val­ues to be pro­moted, away from rules to be fol­lowed and onto in­creas­ingly co­her­ent prin­ci­ples to be ap­plied, and away from max­i­miz­ing ex­pected util­ity and onto max­i­miz­ing po­ten­tial syn­er­gies within a game that is con­sis­tent but in­her­ently in­com­plete. I see Bayes as nec­es­sary but not suffi­cient for this more en­com­pass­ing view.

• BusinessConsultant: “But to say that we are irrational because we are basing our decision on our own personal context is to deny everything that you have built up to this point.” Really? If a decision is irrational, it’s irrational. You can make allowances for circumstance and still attempt to find the most rational choice. Did you read the whole post? Eliezer is at pains to point out that even given different expected utilities for different amounts of money for different people in different circumstances, there is still a rational way to go about making a decision and there is still a tendency for humans to make bad decisions because they are too lazy (my words, not his) to think it through, instead trusting their “intuition” because it “feels right.”

The point about pay­ing two hu­man lives to flip the switch and then switch it back re­ally drove home the point, Eliezer. Also, a good clar­ifi­ca­tion on con­sis­tency. Read­ing the ear­lier post, I also thought of the ob­jec­tion that \$24,000 could change a des­ti­tute per­son’s life by or­ders of mag­ni­tude, whereas \$3000 on top of that would not be equiv­a­lent to 18 more util­ity… the cru­cial differ­ence for a starv­ing, sick per­son is in, say, the first few grand.

But then, as you point out, your prefer­ence for the surer chance of less money should re­main con­sis­tent how­ever the game is stated. Thanks! Very clear...

Also, liv­ing in New York and long­ing for Seat­tle, I found my­self vis­it­ing Seat­tle for Christ­mas and long­ing for New York… hmmm. Maybe I just need a taxi to Oak­land. :P

• “A num­ber of com­menters, yes­ter­day, claimed that the prefer­ence pat­tern wasn’t ir­ra­tional be­cause of “the util­ity of cer­tainty”, or some­thing like that. One com­menter even wrote U(Cer­tainty) into an ex­pected util­ity equa­tion.”

It was not my in­tent to claim “the prefer­ence pat­tern wasn’t ir­ra­tional,” merely that your alge­braic mod­el­ing failed to cap­ture what many could ini­tially claim was a salient de­tail of the origi­nal prob­lem. I hope a reread of my origi­nal com­ment will find it plead­ing, apolo­getic, limited to the alge­braic con­struc­tion, and sincere.

I should have men­tioned that I thought the alge­braic mod­el­ing was a very el­e­gant way to show that the diminish­ing marginal util­ity of money was not at play. If that was its only pur­pose, then the rest of this is un­nec­es­sary, but I think you can use that con­struc­tion to do more, with a lit­tle work.

Here’s one pos­si­ble re­sponse to this ap­par­ent weak­ness in the alge­braic mod­el­ing:

If you can sim­ply as­sert that Allais’s point holds ex­per­i­men­tally for ar­bi­trar­ily in­creas­ing val­ues in place of \$24k and \$27k (which I’m sure you can), then we find this pro­posed “util­ity of cer­tainty” (or what­ever more ap­pro­pri­ate for­mu­la­tion you pre­fer*) in­creas­ing with no up­per bound. The no­tion that we value cer­tainty seems to hold in­tu­itive ap­peal, and I see noth­ing wrong with that on its face. But the no­tion that we value cer­tainty above all else is more starkly im­plau­si­ble (and I would sus­pect demon­stra­bly un­true: would you re­ally give your life just to be­come cer­tain of the out­come of a coin­flip?).

I was try­ing to make the ar­gu­ment stronger, not weaker, but I get the im­pres­sion I’ve some­how pissed all over it. My apolo­gies.

*I’ve read your post on Terminal Values three times and haven’t yet grokked why I can’t feed things like knowledge or certainty into a Utility function. Certainty seems like a “fixed, particular state of the world,” it seems like an “outcome,” not an “action,” and most definitely unlike “1.” If the worry is that certainty is an instrumental value, not a terminal value, why couldn’t one make the same objection of the \$24,000? Money has no inherent value, it is valuable only because it can be spent on things like chocolate pizza. You’ve since replaced the money with lives, but was the original use of money an error? I suspect not… but then what is the precise problem with U(Certainty)?

I should clar­ify that, once again, I bring up these ob­jec­tions not to show where you’ve gone wrong, but to show where I’m hav­ing difficul­ties in un­der­stand­ing. I hope you’ll con­sider these com­ments a use­ful guide as to where you might go more slowly in your ar­gu­ments for the benefit of your read­ers (like my­self) who are a bit dull, and I hope you do not read these com­ments as com­bat­ive, or de­serv­ing of some kind of ex­co­ri­at­ing re­ply.

I’ll keep go­ing over the Ter­mi­nal Values post to see if I can get it to click.

• There is a certain U(certainty) in a game, although there might be better ways to express it mathematically. How do you know the person hosting the game isn’t lying to you and really operating under the algorithm: 1A. Give him \$24,000 because I have no choice. 1B. Tell him he had a chance to win but lost and give nothing.

In the sec­ond situ­a­tion(2A 2B) both op­tions are prob­a­bil­ities and so the player has no choice but to trust the game host.

Also, I am still fuzzy on the whole “money pump” con­cept. “The naive prefer­ence pat­tern on the Allais Para­dox is 1A > 1B and 2B > 2A. Then you will pay me to throw a switch from A to B be­cause you’d rather have a 33% chance of win­ning \$27,000 than a 34% chance of win­ning \$24,000.”

Ok, I pay you one penny. You might be trick­ing me out of one penny(in case you already de­cided to give me noth­ing) but I’m will­ing to take that risk.

“Then a die roll eliminates a chunk of the probability mass. In both cases you had at least a 66% chance of winning nothing. This die roll eliminates that 66%. So now option B is a 33/34 chance of winning \$27,000, but option A is a certainty of winning \$24,000. Oh, glorious certainty! So you pay me to throw the switch back from B to A.”

Yes yes yes, I pay you 1 penny. You now owe me \$24,000. What? You want to some­how go back to a 2A 2B situ­a­tion again? No thanx. I would like to get my money now. Once you promised me money with cer­tainty you can­not in­ject un­cer­tainty back into the game with­out break­ing the rules.
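For concreteness, the single back-and-forth cycle described in the quoted passage can be tallied in a short sketch; the one-penny fee is the hypothetical from the quote, and amounts are kept in integer cents to avoid floating-point dust:

```python
# A concrete tally of one full cycle of the money pump quoted above.
fee_cents = 1
wallet_cents = 0

# Before the die roll the choice looks like 2A vs 2B, and the naive
# pattern prefers 2B, so the player pays to set the switch to B.
wallet_cents -= fee_cents

# The die roll removes the shared 66% chance of nothing; the choice
# now looks like 1A vs 1B, the pattern prefers the certain 1A, so the
# player pays to throw the switch back.
wallet_cents -= fee_cents

# The player is back in the starting position, two cents poorer.
print(wallet_cents)  # -2
```

As the comment argues, repeating the cycle requires re-injecting uncertainty after certainty has been promised, which is exactly what the player would refuse.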

I’m afraid there might still be some inferential distance to cover, Eliezer.

• How do the com­menters who jus­tify the usual de­ci­sions in the face of cer­tainty and un­cer­tainty with re­spect to gain and loss ac­count for this part of the post?

There are var­i­ous other games you can also play with cer­tainty effects. For ex­am­ple, if you offer some­one a cer­tainty of \$400, or an 80% prob­a­bil­ity of \$500 and a 20% prob­a­bil­ity of \$300, they’ll usu­ally take the \$400. But if you ask peo­ple to imag­ine them­selves \$500 richer, and ask if they would pre­fer a cer­tain loss of \$100 or a 20% chance of los­ing \$200, they’ll usu­ally take the chance of los­ing \$200. Same prob­a­bil­ity dis­tri­bu­tion over out­comes, differ­ent de­scrip­tions, differ­ent choices.

As­sum­ing that this ex­per­i­ment has ac­tu­ally been val­i­dated, there’s hardly a clearer ex­am­ple of ob­vi­ous bias than a per­son’s de­ci­sion on the ex­act same cir­cum­stance be­ing de­ter­mined by whether it’s de­scribed as cer­tain vs. un­cer­tain gain or cer­tain vs. un­cer­tain loss.

And Eliezer, I have to compliment your writing skills: when faced with people positing a utility of certainty, the first thing that came to my mind was the irrational scale invariance such a concept must have if it fulfills the stated role. But if you’d just stated that, people would have argued to Judgment Day on nuances of the idea, trying to salvage it. Instead, you undercut the counterargument with a concrete reductio ad absurdum, replacing \$24,000 with 24,000 lives, which you realized would make your interlocutors uncomfortable about making an incorrect decision for the sake of a state of mind. You seem to have applied a vital principle: we generally change our minds not when a good argument is presented to us, but when it makes us uncomfortable by showing how our existing intuitions conflict.

If and when you pub­lish a book, if the writ­ing is of this qual­ity, I’ll recom­mend it to the heav­ens.

• there’s hardly a clearer ex­am­ple of ob­vi­ous bias than a per­son’s de­ci­sion on the ex­act same cir­cum­stance be­ing de­ter­mined by whether it’s de­scribed as cer­tain vs. un­cer­tain gain or cer­tain vs. un­cer­tain loss.

But it’s not the ex­act same cir­cum­stance. You are ig­nor­ing the fun­da­men­tal differ­ence be­tween the two con­di­tions.

• But it’s not the ex­act same cir­cum­stance. You are ig­nor­ing the fun­da­men­tal differ­ence be­tween the two con­di­tions.

Show us. Use maths.

• Ben Jones, and Patrick (orthonormal), if you offer me \$400 I’ll say ‘yes, thank you’. If you offer me \$500 I’ll say ‘yes, thank you’. If, from whatever my current position is after you’ve been so generous, you ask me to choose between “a certain loss of \$100 or a 20% chance of losing \$200”, I’ll choose the 20% chance of losing \$200. That’s my math, and I accept money orders, wire transfers, or cash....

• You come out with the same amount of money, but a differ­ent thing hap­pens to get you there. This mat­ters emo­tion­ally, even though it shouldn’t (or seems like it shouldn’t). A util­ity func­tion can take things other than money into ac­count, you know.

• Show us. Use maths.

The dis­tinc­tion already ex­ists in the nat­u­ral lan­guage used to de­scribe the two sce­nar­ios.

In one sce­nario, we are told that a cer­tain amount of money will be­come ours, but we do not yet pos­sess it. In the other, we con­sider our­selves to already pos­sess the money and are given op­por­tu­ni­ties to risk some of it.

Hy­po­thet­i­cal money is not treated as equiv­a­lent to pos­sessed money. (Well, hy­po­thet­i­cal hy­po­thet­i­cal vs. pos­sessed hy­po­thet­i­cal in the ex­per­i­ment dis­cussed, but you know what I mean.)

• This mat­ters emo­tion­ally, even though it shouldn’t (or seems like it shouldn’t).

Hy­po­thet­i­cal money is not treated as equiv­a­lent to pos­sessed money.

My point ex­actly. It’s perfectly un­der­stand­able that we’ve evolved a “bird in the hand/​two in the bush” heuris­tic, be­cause it makes for good de­ci­sions in many com­mon con­texts; but that doesn’t pre­vent it from lead­ing to bad de­ci­sions in other con­texts. And we should try to over­come it in situ­a­tions where the ac­tual out­come is of great value to us.

A util­ity func­tion can take things other than money into ac­count, you know.

As well it should. But how large should you set the util­ities of psy­chol­ogy that make you treat two de­scrip­tions of the same set of out­comes differ­ently? Large enough to ac­count for a differ­ence of \$100 in ex­pected value? \$10,000? 10,000 lives?

At some point, you have to stop rely­ing on that heuris­tic and do the math if you care about mak­ing the right de­ci­sion.

• But how large should you set the util­ities of psy­chol­ogy that make you treat two de­scrip­tions of the same set of out­comes differ­ently?

As far as I’m con­cerned, zero; we agree on this. My point was only that it’s mis­lead­ing to say “the same set of out­comes” or “the same cir­cum­stance” for the same amount of money; a differ­ent thing hap­pens to get to the same mon­e­tary end­point. It’s not a differ­ence that I (or my ideal­ized self) care(s) about, though.

Similarly, I think it’s mis­lead­ing to say “choos­ing 1A and 2B is ir­ra­tional” with­out adding the caveat “if util­ity is solely a func­tion of money, not how you got that money”.

• “Do­ing the math” re­quires that we ac­cept a par­tic­u­lar model of util­ity and value, though. And this is why peo­ple are ob­ject­ing to Eliezer’s claims—he is im­plic­itly ap­ply­ing one model, and then act­ing as though no as­sump­tion was made.

• “There are var­i­ous other games you can also play with cer­tainty effects. For ex­am­ple, if you offer some­one a cer­tainty of \$400, or an 80% prob­a­bil­ity of \$500 and a 20% prob­a­bil­ity of \$300, they’ll usu­ally take the \$400. But if you ask peo­ple to imag­ine them­selves \$500 richer, and ask if they would pre­fer a cer­tain loss of \$100 or a 20% chance of los­ing \$200, they’ll usu­ally take the chance of los­ing \$200. Same prob­a­bil­ity dis­tri­bu­tion over out­comes, differ­ent de­scrip­tions, differ­ent choices.”

Ok, let’s represent this more clearly.

a1 − 100% chance to win \$400
a2 − 80% chance to win \$500 and 20% chance to win \$300

b1 − 100% chance to win \$500 and 100% chance to lose \$100
b2 − 100% chance to win \$500 and 20% chance to lose \$200

Let’s write it out using utility functions.

a1 − 100%·U[\$400]
a2 − 80%·U[\$500] + 20%·U[\$300]

b1 − 100%·U[\$500] + 100%·U[−\$100]?
b2 − 100%·U[\$500] + 20%·U[−\$200]?

Wait a minute. The probabilities don’t add up to one. Maybe I haven’t phrased the description correctly. Let’s try that again.

b1 − 100% chance to both win \$500 and lose \$100
b2 − 20% chance to both win \$500 and lose \$200, leaving an 80% chance to win \$500 and lose \$0

b1 − 100%·U[\$500 − \$100] = 100%·U[\$400]
b2 − 20%·U[\$500 − \$200] + 80%·U[\$500 − \$0] = 80%·U[\$500] + 20%·U[\$300]

This is exactly the same thing as a1 and a2. More important, however, is that the \$500 is just a value used to calculate what to plug into the utility function. The \$500 by itself has no probability coefficient, and therefore its ‘certainty’ is irrelevant to the problem at hand. It’s a trick using clever wordplay to make one believe there is a ‘certainty’ when none is there. It’s not the same as the Allais paradox.
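The equivalence argued above is easy to check numerically: for any utility function U, the “gain” framing (a1, a2) and the “imagine yourself \$500 richer” framing (b1, b2) assign identical expected utilities. A minimal sketch (the three test utility curves are arbitrary choices):

```python
# Numerical check: both framings are the same pair of lotteries,
# so any utility function U scores them identically.
import math

def expected_utility(lottery, U):
    """lottery: list of (probability, dollar outcome) pairs."""
    return sum(p * U(x) for p, x in lottery)

a1 = [(1.00, 400)]
a2 = [(0.80, 500), (0.20, 300)]
b1 = [(1.00, 500 - 100)]                    # certain $500, certain $100 loss
b2 = [(0.80, 500 - 0), (0.20, 500 - 200)]   # certain $500, 20% chance of a $200 loss

for U in (lambda x: x, math.sqrt, math.log):
    assert math.isclose(expected_utility(a1, U), expected_utility(b1, U))
    assert math.isclose(expected_utility(a2, U), expected_utility(b2, U))
print("framings agree for every utility curve tested")
```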

As for the Allais para­dox, I’ll have to take an­other look at it later to­day.

• Eliezer, you need to specify whether it’s a one-time choice or if it will be repeated. You need to specify whether lives or dollars are at stake. These things matter.

• I think there are some places where it is ra­tio­nal to take this kind of bet the less-ex­pected-value way for a greater prob­a­bil­ity. Say you’re walk­ing along the street in tears be­cause mob­sters are go­ing to burn down your house and kill your fam­ily if you don’t pay back the \$20,000 you owe them and you don’t have the cash. Then some ran­dom billion­aire comes along and offers you ei­ther A. \$25,000 with prob­a­bil­ity 1 or B. \$75,000 with prob­a­bil­ity 50%. By naive mul­ti­pli­ca­tion, you should take the sec­ond bet, but here there’s a high ad­di­tional cost of failure which you might well want to avoid with high prob­a­bil­ity. (It be­comes a de­ci­sion about the util­ities of not pay­ing the mob vs. hav­ing X ad­di­tional money to send your kid to col­lege af­ter­wards. This has its own tip­ping point; but there’s a ra­tio­nal case to be made for tak­ing A over B.)

• This is why you should use ex­pected util­ity calcu­la­tions. The util­ity of \$20,000 also con­tains the util­ity of sav­ing your fam­ily’s lives (say \$1,650,000) and re­tain­ing a house (\$300,000), so choos­ing be­tween 100% chance of \$1,975,000 or 50% chance of \$2,025,000 is much eas­ier.
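The arithmetic in that reply can be made explicit; the dollar-equivalents for the family’s lives and the house are the commenter’s stipulations, not serious valuations:

```python
# The comment's expected-utility calculation, using its stipulated
# dollar-equivalents for the non-monetary stakes.
LIVES, HOUSE, DEBT = 1_650_000, 300_000, 20_000

def total_value(cash):
    # If the mob's $20,000 can be paid, the family and house are kept.
    return cash + (LIVES + HOUSE if cash >= DEBT else 0)

eu_a = 1.0 * total_value(25_000)                         # certain $25,000
eu_b = 0.5 * total_value(75_000) + 0.5 * total_value(0)  # 50% of $75,000

print(eu_a, eu_b)  # 1975000.0 1012500.0
```

Once the hidden stakes are folded into the outcomes, the “naive multiplication” favors option A after all, matching the reply’s point.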

• Maybe I’m missing something obvious, but doesn’t diminishing marginal utility play a big role here? After all, almost all of us would prefer \$1,000,000 with certainty to \$2,000,100 with 50% probability, and it would be perfectly rational to do so—not because of the “utility of certainty,” but because \$2 million isn’t quite twice as good as \$1 million (for most people). But if you offered us this same choice a thousand times, we would probably then take the \$2,000,100, because the many coin flips would reduce the variance enough to create a higher expected utility, even with diminishing marginal returns. (If the math doesn’t quite seem to work out, you could probably work out numbers that would.)

So it seems at least plausible that you could construct versions of the money pump problem where you could rationally prefer bet A to bet B in a one-off shot, but where you would then change your preference to bet B if offered multiple times. Obviously I’m not saying that’s what’s really going on—the Allais paradox surely does demonstrate a real and problematic inconsistency. But we shouldn’t conclude from that that it’s always rational to just “shut up and multiply,” at least when we’re talking about anything other than “raw” utility.
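Taking up the parenthetical invitation to “work out numbers that would,” here is a sketch under an assumed square-root utility curve, with the gamble’s prize raised to \$3,000,000 so the effect is unambiguous: the certain option wins the one-shot comparison, but over a thousand repeated plays the gamble wins.

```python
import math, random

random.seed(0)

def U(wealth):
    # Assumed diminishing-marginal-utility curve: square root.
    return math.sqrt(wealth)

CERTAIN, PRIZE, P = 1_000_000, 3_000_000, 0.5

# One-shot expected utilities.
one_shot_certain = U(CERTAIN)                       # 1000.0
one_shot_gamble = P * U(PRIZE) + (1 - P) * U(0)     # ~866.0

# Repeated play: utility of *total* wealth after 1000 rounds,
# estimated by Monte Carlo for the gamble.
def mean_total_utility(choose_gamble, plays=1000, trials=500):
    acc = 0.0
    for _ in range(trials):
        wealth = 0
        for _ in range(plays):
            if choose_gamble:
                wealth += PRIZE if random.random() < P else 0
            else:
                wealth += CERTAIN
        acc += U(wealth)
    return acc / trials

print(one_shot_certain > one_shot_gamble)                    # True
print(mean_total_utility(True) > mean_total_utility(False))  # True
```

Repetition shrinks the gamble’s variance relative to its mean, so the concave utility curve stops penalizing it enough to outweigh the higher expected dollars.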

• “I can cause you to in­vert your prefer­ences over time and pump some money out of you.”

I think the small qual­ifier you slipped in there, “over time”, is more salient than it ap­pears at first.

Like most ca­su­ally in­tu­itive hu­mans, I’ll pre­fer 1A over 1B, and (for the sake of this ar­gu­ment) 2B over 2A, and you can pump some money out of me for a bit.

But… as a somewhat rational thinker, you won’t be able to pump an unbounded amount of money out of me. Eventually I catch on to what you’re doing and your trickle of cents will disappear. I will go, “well, I don’t know what’s wrong with my feeble intuition, but I can tell that Eliezer is going to end up with all my money this way, so I’ll stop even though it goes against my intuition.” If you want to accelerate this, make the stuff worth more than a cent. Tell someone that the “mathematically wrong choice will cost you \$1,000,000”, and I bet they’ll take some time to think and choose a set of beliefs that can’t be money-pumped.

Or, change the time as­pect. I sus­pect if I were im­mor­tal (or at least be­lieved my­self to be), I would hap­pily choose 1B over 1A, and cer­tainty be screwed. Maybe I don’t get the money, so what, I have an in­finite amount of time to earn it back. It’s the fact that I don’t get to play the game an un­limited amount of times that makes cer­tainty a more valuable as­pect.

• This appears to be (to my limited knowledge of what science knows) a well-known bias. But like most biases, I think I can imagine occasions when it serves as a heuristic.

The thought oc­curred to me be­cause I play mi­ni­a­ture and card games—I see other com­menters have also men­tioned some games.

Let’s say, for example, I have a pair of cards that both give me X of something—say each deals a certain amount of damage, for those familiar with these games. One card gives me 4 of that something. The other gives me 1–8 over a uniform random distribution—maybe a die roll.

Experienced players of these games will tell you that unless the random card gives you a higher expected value, you should play the certain card. And empirical evidence would seem to suggest that they know what they’re talking about, because these are the players who win games. What do they say if you ask them why? They say you can plan around the certain gain.

I think that no­tion is im­por­tant here. If I have a gain that is cer­tain, at least in any of these games, I can ex­ploit it to its ful­lest po­ten­tial—for a high fi­nal util­ity. I can lure my op­po­nent into a trap be­cause I know I can beat them, I can make an ag­gres­sive move that only works if I deal at least four dam­age—heck, the mere abil­ity to trim down my in­for­mal Min­i­max tree is no small gain in a situ­a­tion like this.

Deal­ing 4 dam­age with­out ex­ploit­ing it has a much smaller end pay­off. And sure, I could try to ex­ploit the ran­dom effect in just the same way—I’ll get the same effect if I win my roll. But if I TRY to ex­ploit that gain and FAIL, I’ll be pun­ished severely. If you add in these val­ues it skews the de­ci­sion ma­trix quite a bit.
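The “plan around the certain gain” intuition can be sketched with made-up payoffs; the bonus, penalty, and threshold below are illustrative assumptions, not numbers from any particular game:

```python
import statistics

# Hypothetical payoffs: committing to the aggressive line yields a
# bonus of +6 if you deal at least 4 damage, but costs -8 if you fall
# short (you overextended). Damage itself is worth 1 point per unit.
BONUS, PENALTY, THRESHOLD = 6, -8, 4

def value(damage, committed):
    base = damage
    if committed:
        base += BONUS if damage >= THRESHOLD else PENALTY
    return base

# Certain card: always 4 damage, so committing is safe.
certain_ev = value(4, committed=True)  # 4 + 6 = 10

# Random card: uniform 1..8, raw expected damage 4.5 (higher than 4).
random_uncommitted = statistics.mean(value(d, False) for d in range(1, 9))  # 4.5
random_committed = statistics.mean(value(d, True) for d in range(1, 9))     # 5.25

print(certain_ev, random_uncommitted, random_committed)  # 10 4.5 5.25
```

The random card has the higher raw expected value, but once the exploitation payoff and the failure penalty enter the matrix, the certain card dominates either way of playing the random one.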

And none of this is to say that the gam­bling out­comes be­ing used as ex­am­ples above aren’t what they seem to be. But I’m won­der­ing if hu­mans are bad at these de­ci­sions partly be­cause the an­ces­tral en­vi­ron­ment con­tained many ex­am­ples of situ­a­tions like the one I’ve de­scribed. Try­ing to ex­ploit a hunt­ing tech­nique that MIGHT work could get you eaten by a bear—a high nega­tive util­ity hid­den in that ma­trix. And this could lead, af­ter nat­u­ral se­lec­tion, to hu­mans who ac­count for such ‘hid­den’ down­sides even when they don’t ex­ist.

• I agree that in many examples, like the simple risk/reward decisions shown here, certainty does not give an option higher utility. However, there are situations in which it might be advantageous to make a decision that has a worse expected outcome but is more certain. The example that comes to mind is complex plans that involve many decisions which affect each other. There is a computational cost associated with uncertainty, in that multiple possible outcomes must be considered in the plan; the plan “branches.” Certainty simplifies things. For an agent with limited computing power in a situation where there is a cost associated with spending time on planning, this might be significant.
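A toy illustration of that branching cost (the step counts and outcome counts are arbitrary): each uncertain step multiplies the number of contingencies a complete plan must cover, while certain steps leave it a single line.

```python
# Number of distinct contingencies a complete plan must cover, given
# how many possible outcomes each sequential step has.
def contingencies(outcomes_per_step):
    total = 1
    for k in outcomes_per_step:
        total *= k
    return total

all_certain = contingencies([1] * 10)              # 10 certain steps: 1 plan
half_uncertain = contingencies([2] * 5 + [1] * 5)  # 5 coin-flip steps: 32 branches

print(all_certain, half_uncertain)  # 1 32
```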

• And the fact that situ­a­tions like that oc­curred in hu­man­ity’s evolu­tion ex­plains why hu­mans have the prefer­ence for cer­tainty that they do.

• I should have read this post before replying on the last one, I suppose! Things are a little more clear.

Hmm… well I had more writ­ten but for brevity’s sake: I sup­pose my prefer­ence sys­tem looks more like 1A>1B, 2A=2B. I don’t re­ally have a strong prefer­ence for an ex­tra 1% vs an ex­tra \$3k ei­ther way.

The pump really only functions with repeated plays; however, in that case I’d take 1B instead of 1A.

• A pump which only pumps once isn’t much of a pump. In order to run the Vegas money pump (three intransitive bets) you need only offer hypothetical alternatives, because the gambler’s preferences are inconsistent. You can do this forever. But to run the Allais money pump, you’re changing the preferred bet by making one outcome certain. So to do it again, you’d need to reset the scenario by removing the certainty somehow. The gambler would oppose this, so their preferences are consistent.

And I think it might be helpful to phrase it as avoidance of regret, rather than valuing certainty. People have a powerful aversion to the anticipated regret of coming away with nothing from a scenario where they could have gained substantially. There are also interactions here with loss aversion for the lives formulation: dollars gained vs lives lost.

Writing this comment has given me a new appreciation for the difficulties of clearly and concisely explaining things. I’ve rewritten it a few times and I’m still not happy, but I’m posting anyway because of thematically appropriate loss aversion.