# Allais Malaise

Continuation of: The Allais Paradox, Zut Allais!

Judging by the comments on Zut Allais, I failed to emphasize the points that needed emphasis.

The problem with the Allais Paradox is the incoherent pattern 1A > 1B, 2B > 2A. If you need \$24,000 for a lifesaving operation and an extra \$3,000 won’t help that much, then you choose 1A > 1B and 2A > 2B. If you have a million dollars in the bank account and your utility curve doesn’t change much with an extra \$25,000 or so, then you should choose 1B > 1A and 2B > 2A. Neither the individual choice 1A > 1B, nor the individual choice 2B > 2A, is of itself irrational. It’s the combination that’s the problem.

Expected utility is not expected dollars. In the case above, the utility-distance from \$24,000 to \$27,000 is a tiny fraction of the distance from \$21,000 to \$24,000. So, as stated, you should choose 1A > 1B and 2A > 2B, a quite coherent combination. The Allais Paradox has nothing to do with believing that every added dollar is equally useful. That idea has been rejected since the dawn of decision theory.

If satisfying your intuitions is more important to you than money, do whatever the heck you want. Drop the money over Niagara Falls. Blow it all on expensive champagne. Set fire to your hair. Whatever. If the largest utility you care about is the utility of feeling good about your decision, then any decision that feels good is the right one. If you say that different trajectories to the same outcome “matter emotionally”, then you’re attaching an inherent utility to conforming to the brain’s native method of optimization, whether or not it actually optimizes. Heck, running around in circles from preference reversals could feel really good too. But if you care enough about the stakes that winning is more important than your brain’s good feelings about an intuition-conforming strategy, then use decision theory.

If you suppose the problem is different from the one presented - that the gambles are untrustworthy and that, after this mistrust is taken into account, the payoff probabilities are not as described - then, obviously, you can make the answer anything you want.

Let’s say you’re dying of thirst, you only have \$1.00, and you have to choose between a vending machine that dispenses a drink with certainty for \$0.90, versus spending \$0.75 on a vending machine that dispenses a drink with 99% probability. Here, the 1% chance of dying is worth more to you than \$0.15, so you would pay the extra fifteen cents. You would also pay the extra fifteen cents if the two vending machines dispensed drinks with 75% probability and 74% probability respectively. The 1% probability is worth the same amount whether or not it’s the last increment towards certainty. This pattern of decisions is perfectly coherent. Don’t confuse being rational with being shortsighted or greedy.

Added: A 50% probability of \$30K and a 50% probability of \$20K is not the same as a 50% probability of \$26K and a 50% probability of \$24K. If your utility is logarithmic in money (the standard assumption), then you will definitely prefer the latter to the former: 0.5 log(30) + 0.5 log(20) < 0.5 log(26) + 0.5 log(24). You take the expectation of the utility of the money, not the utility of the expectation of the money.
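The addendum’s inequality is easy to verify numerically; a minimal sketch in Python, under the same log-utility assumption (the helper name is mine):

```python
import math

def expected_log_utility(outcomes):
    """Expected utility of a gamble, assuming utility logarithmic in money.

    `outcomes` is a list of (probability, dollars_in_thousands) pairs.
    """
    return sum(p * math.log(x) for p, x in outcomes)

# 50% $30K / 50% $20K vs. 50% $26K / 50% $24K: same expected dollars ($25K)...
wide = expected_log_utility([(0.5, 30), (0.5, 20)])
narrow = expected_log_utility([(0.5, 26), (0.5, 24)])

# ...but the narrower gamble has strictly higher expected log-utility.
print(wide, narrow)
```

Both gambles have the same expectation of the money; only the expectation of the utility differs.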

• Seemed reasonable to me.

Of course, maybe you oughtn’t try to convince people too hard of this… instead take their pennies. ;)

Oh, a bit off topic, but on the subject of coherence/dutch book/vulnerability arguments, I like them because:

1. Depending on formulation, they’ll give you epistemic probability and decision theory all at once.

2. Has a “mathematical karma” flavor. I.e., “no, you’re not good or evil or anything for listening or ignoring this. Simply that there’re natural mathematical consequences if you don’t organize your decisions and beliefs in terms of these principles.” Just a bit of a different flavor than other types of math I’ve seen. And I like saying “mathematical karma.” :)

3. The arguments of these sorts that I’ve seen don’t seem to ever demand much more than linear algebra. Cox’s theorem involves somewhat tougher math and the derivations are a bit longer. It’s useful to know that it’s there, but coherence arguments seem to be mathematically, well, “cleaner” and also more intuitive, at least to me.

• (sighs)

If you actually had to explain all of this to Overcoming Bias readers, I shudder to think of how some publishing bureaucrat would react to a book on rationality. “What do you mean, humans aren’t rational? Haven’t you ever heard of Adam Smith?”

• Do I detect a hint of irritation? ;-)

I have a question though. Are you able to use probability math in all your own decisions—even quick, casual ones? Are you able to “feel” the Bayesian answer?

I suppose what I’m groping towards here is: can native intuitions be replaced with equally fast, but accurate, ones? It would seem a waste to have to run calculations in the slowest part of our brains.

• Julian,

When hundreds or thousands of dollars are at stake, e.g. in Eliezer’s example, or when setting a long-term policy (a key point) for yourself about whether to buy expensive store warranties for personal electronics, taking a couple of minutes to work out the math will have a fantastic cost:benefit ratio. If you’re making decisions about investments or medical care, the stakes will be much higher. People do in fact go to absurd lengths to avoid even simple mental arithmetic, but you can’t justify the behavior based on the time costs of calculation.

• I think psy-kosh’s “karma” idea is worth considering, but your rhetoric is much better here than the previous two attempts, as far as I’m concerned. It’s important—especially for a lay audience like me that doesn’t already know what kind of argument you’re trying to make—to distinguish between contingent advice and absolute imperatives. (It may be that the second category can properly never be demonstrated, but a lot of people make those kinds of claims anyway, so it’s a poor interpretive strategy to assume that that’s not what people are saying.)

• “Let’s say you’re dying of thirst, you only have \$1.00, and you have to choose between a vending machine that dispenses a drink with certainty for \$0.90, versus spending \$0.75 on a vending machine that dispenses a drink with 99% probability. Here, the 1% chance of dying is worth more to you than \$0.15, so you would pay the extra fifteen cents. You would also pay the extra fifteen cents if the two vending machines dispensed drinks with 75% probability and 74% probability respectively. The 1% probability is worth the same amount whether or not it’s the last increment towards certainty.”

OK, the benefit of a 1% chance of surviving with \$0.10 in my pocket is the same regardless of whether I move from 99% to 100% or from 74% to 75%. However, the costs differ: in the first case I lose (U(\$0.25) - U(\$0.10)) * 0.99, while for the second I lose (U(\$0.25) - U(\$0.10)) * 0.74.
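The trade-off being nitpicked here can be made concrete in a toy expected-utility model of the vending-machine choice (Python; all the utility values below are invented for illustration, with staying alive worth far more than any pocket change):

```python
# Hypothetical utility of surviving with some money left over.
def u_alive(dollars_left):
    return 100.0 + dollars_left

U_DEAD = 0.0  # hypothetical utility of dying (money no longer matters)

def expected_utility(p_drink, price, budget=1.00):
    """Expected utility of paying `price` for a drink that arrives
    with probability `p_drink`, starting from `budget` dollars."""
    change = budget - price
    return p_drink * u_alive(change) + (1 - p_drink) * U_DEAD

# Certain machine at $0.90 vs. 99% machine at $0.75:
print(expected_utility(1.00, 0.90), expected_utility(0.99, 0.75))
# 75% machine at $0.90 vs. 74% machine at $0.75:
print(expected_utility(0.75, 0.90), expected_utility(0.74, 0.75))
```

Under these made-up numbers the surer machine wins both comparisons: each extra 1% of drink probability buys the same 0.01 * (u_alive - U_DEAD) of utility whether or not it is the final step to certainty.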

• I also noticed that and was wondering how many comments it would take before somebody nitpicked this fairly trivial point.

It was 6 ;)

• @ Carl Shulman

I avoid mental arithmetic because I tend to drop decimals, misremember the rules of algebra, and other heinous sins. It’s on my list to get better at, but right now I can’t trust even simple sums I do in my head.

• Julian,

What about cell phone and pocket calculators? Microsoft Excel can let you organize your data nicely for expected value and net present value calculations for internet purchases, decisions on insurance, etc. There’s no shame in using arithmetic aids, just as we use supplementary memory to remember telephone numbers and the like.

• “I avoid mental arithmetic because I tend to drop decimals, misremember the rules of algebra, and other heinous sins”

Fast and accurate arithmetic can be trained with any number of software packages. Why not give it a try? I did and it worked for me.

• Eliezer_Yudkowsky: AFAICT, and correct me if I’m wrong, you didn’t address Gray_Area’s objection on the first post on this topic, which resolved the inconsistency to my satisfaction.

Specifically, the subjects are presented with a one-shot choice between the bets. You are reading “I prefer A to B” to mean “I will write an infinite number of American options to convert B’s into A’s” even when that doesn’t obviously follow from the choices presented. Once it becomes an arbitrarily repeatable or repeated game, Gray_Area’s argument goes, they will revert to expected monetary value maximization.

And in fact, that’s exactly what they did.

If someone has seen that point addressed, please link the comment or post and I will apologize. Here is the relevant part of Gray_Area’s post, just to save you time:

“...the ‘money pump’ argument fails because you are changing the rules of the game. The original question was, I assume, asking whether you would play the game once, whereas you would presumably iterate the money pump until the pennies turn into millions. The problem, though, is if you asked people to make the original choices a million times, they would, correctly, maximize expectations. Because when you are talking about a million tries, expectations are the appropriate framework. When you are talking about 1 try, they are not.”

• I too initially shared Gray_Area’s objection, but Eliezer did in fact address it:

If you need \$24,000 for a lifesaving operation and an extra \$3,000 won’t help that much, then you choose 1A > 1B and 2A > 2B. If you have a million dollars in the bank account and your utility curve doesn’t change much with an extra \$25,000 or so, then you should choose 1B > 1A and 2B > 2A.

The comments introducing the idea of a lifesaving operation actually clarified why that objection isn’t reasonable. If I need some money more than I need more money, then I should choose 1A > 1B and 2A > 2B.

• “If the largest utility you care about is the utility of feeling good about your decision, then any decision that feels good is the right one.”

I don’t think so, Eliezer. Perhaps you’ve misunderstood the argument. It isn’t necessarily “any decision that feels good”, it’s any decision that gets the decider what the decider wants. I was trying to raise a question about your assumptions about what matters. Sometimes, the way you write, it seems you may not be aware that your particular model of what should matter to people isn’t shared by everyone.

I agree that if you think you are playing a particular game, and you want to win that game, there may be very specific things you need to do to win. Where I’m trying to draw your attention is to the fact that human activity encompasses a great number of different games, simultaneously. A rejection of the game you want to play is not the same thing as saying “anything goes.” If you are talking about chess, and someone says “Hey, I play checkers,” the proper response is not “Oh, well then it doesn’t matter what move you make. You can make any move.”

It wouldn’t take very much adjustment of your rhetoric to avoid wantonly trampling on the flowerbeds of alternative utility systems. You can be incisive without being mean-spirited.

• well that sure was a lot of bold text.

• Quite a few assumptions about what people should want are being bandied about here, and not much support has been introduced for them.

I would like a more explicit recognition of the values that are being taken for granted. I suspect there are readers who feel the same way.

• I actually don’t mind the bold text, so much as the French wordplay :-/

(Yes, I know malaise is used in English as well. I’m making a general point.)

• OK, Eliezer, let me try to turn your example around. (I think) I understand—and agree with—everything in this post (esp. the boldface). Nonetheless:

Assume your utility of money was linear. Imagine two financial investments (bets), 3A and 3B:
3A: 100% chance of \$24,000
3B: 50% chance of \$26,000, 50% chance of \$24,000.

Presumably, (given a linear utility of money), you would say that you were indifferent between the two bets. Yet in actual financial investing, in the real world, you receive an (expected) return premium for accepting additional risk. Roughly speaking, the expected return of an investment goes up as the volatility of that return increases.

It seems that I could construct a money pump for YOU, out of real-world investments! You appear to be making the claim that all that matters for rational decision-making is expected value, and that volatility is not a factor at all. I think you’re incorrect about that, and the actual behavior of real-world investments appears to support my position.

I’m not sure what the correct accounting for uncertainty should be in your original 1A/1B/2A/2B example. But it sure seems like you’re suggesting that the ONLY thing that matters is expected value (and then some utility on the money outcomes) -- but nowhere in your calculations do I see a risk premium, some kind of straightforward penalty for volatility of outcome.

Again, if you think that rational decision-making shouldn’t use such information, then I’m certain that I can find real-world investments where you ought to accept lower returns for a volatile investment than is offered by the actual investment, and I can pocket the difference. A real-world money pump—on you.

Or have I completely missed the point somehow?

• Geddis,

I think you meant for 3B to offer a 50% chance of being lower than 3A and a 50% chance of being higher, rather than being either equal or higher?

• Don Geddis, see addendum above. When you start out by saying, “Assume utility is linear in money,” you beg the question.

There are three major reasons not to like volatility:

1) Not every added dollar is as useful as the last one. (When this rule is violated, you like volatility: If you need \$15,000 for a lifesaving operation, you would want to double-or-nothing your \$10,000 at 50-50 odds.)

2) Your investment activity has a boundary at zero, or at minus \$10,000, or wherever—once you lose enough money you can no longer invest. If you random-walk a linear graph, you will eventually hit zero. Random-walking a logarithmic graph never hits zero. This means that the hit from \$100 to \$0 is much larger than the hit from \$200 to \$100, because you have nothing left to invest.

Both of these points imply that utility is not linear in money.

3) You can have opportunities to take advance preparations for known events, which changes the expected utility of those events. For example, if you know for certain that you’ll get \$24,000 five years later, then you can borrow \$18,000 today at 6% interest and be confident of paying back the loan. Note that this action induces a sharp utility gradient in the vicinity of \$24,000. It doesn’t generate an Allais Paradox, unless the Allais payoff is far enough in the future that you have an opportunity to take an additional advance action in scenario 1 that is absent in scenario 2.
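The loan arithmetic in point 3 can be checked directly; a quick sketch in Python, under both simple and annually compounded interest, since the example doesn’t specify which:

```python
# Borrow $18,000 today at 6% against a certain $24,000 in five years.
principal, rate, years = 18_000, 0.06, 5

simple = principal * (1 + rate * years)     # simple interest: $23,400 owed
compound = principal * (1 + rate) ** years  # annual compounding: ~$24,088 owed

print(simple, round(compound, 2))
```

Under simple interest the certain \$24,000 covers repayment with room to spare; under annual compounding it is roughly break-even, which either way puts a sharp utility gradient near \$24,000.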

(Incidentally, the opportunity to take additional advance actions if the Allais payoff is far enough in the future is by far the strongest argument in favor of trying to attach a normative interpretation to the Allais Paradox that I can think of. And come to think of it, I don’t remember ever hearing it pointed out before.)

• The utility here is not just the value of the money received. It’s also the peace of mind of knowing that money was not lost.

As other comments have pointed out, it’s very important that the game is played in a once-off way, rather than repeatedly. If it’s played repeatedly, then it does become a “money pump”, but the game’s dynamics are different for once-off, and in once-off games the “money pump” does not apply.

If someone needs to choose once-off between 1A and 1B, they’ll usually choose the 100% certain option not because they’re being irrational, or being inconsistent compared to the choice between 2A and 2B, but because the inherent emotional feeling of loss from having missed out on a substantial gain that was a sure thing is very unpleasant. So, people will rationally pay to avoid that emotional response.

This has to do with the makeup of humans. Humans aren’t always rational—what’s more, it’s not rational for them not to be always rational. You should be well aware of this from evolutionary studies.

• I said “it’s not rational for them not to be always rational”: I meant to say: “it’s not rational for them to be always rational”.

• This is interesting. When I read the first post in this series about Allais, I thought it was a bit dense compared to other writing on OB. It occurred to me that you had violated your own rule of aiming very, very low in explaining things.

As it turns out, that post has generated two more posts of re-explanation, and a fair bit of controversy.

When you write that book of yours, you might want to treat these posts as a first draft, and go back to your normal policy of simple explanations 8)

• “it’s not rational for them to be always rational”

Everything in moderation, especially moderation.

• Sorry for my typo in my example. Of course I meant to say that 3A was 100% at \$24K, and 3B was 50% @ \$26K and 50% @ \$22K. The whole point was for the math to come out with the same expected value at \$24K, just 3B has more volatility. But I think everyone got my intent despite my typo.

Eliezer of course jumped right to the key, which is the (unrealistic) assumption of linear utility. I was going to log in this morning and suggest that the financial advice of “always get paid for accepting volatility” and/or “whenever you can reduce volatility while maintaining expected value, do so” was really a rule-of-thumb summary for common human utility functions. Which is basically what Eliezer suggested in the addendum: that log utility + Bayes results in the same financial advice.

The example I was going to try to suggest this morning, in investment theory, is diversification. If you invest in a single stock that historically returns 10% annually, but sometimes −20% and sometimes +40%, it is “better” to instead invest 1/10 of your assets in each of 10 such (uncorrelated) stocks. The expected return doesn’t change: it’s still 10% annually. But the volatility drops way down. You bunch up all the probability around the expected return (using a basket of stocks), whereas with a single stock the probabilities are far more spread out.
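The diversification claim can be made quantitative without any simulation: for uncorrelated stocks, an equal-weight basket keeps the expected return while dividing the variance by the number of stocks. A sketch in Python using the comment’s numbers:

```python
import math

# One coin-flip stock: 50% chance of -20%, 50% chance of +40%.
returns = [-0.20, 0.40]
mean = sum(returns) / len(returns)  # 10% expected annual return
std = math.sqrt(sum((r - mean) ** 2 for r in returns) / len(returns))  # 0.30

# Equal-weight basket of n uncorrelated copies: same mean, variance / n,
# so the standard deviation shrinks by sqrt(n).
n = 10
basket_mean = mean
basket_std = std / math.sqrt(n)  # ~0.095

print(mean, std, basket_mean, round(basket_std, 4))
```

Same 10% expected return either way; the basket just bunches the probability mass around it.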

But probably you can get to this same conclusion with log utilities and Bayes.

My final example this morning was going to be on how you can use confidence to make further decisions, in between the time you accept the bet and the time you get the payoff. This is true, for example, for tech managers trying to get a software project out. It’s far more important how reliable the programmer’s estimates are than it is what their average productivity is. The overall business can only plan (marketing, sales, retail, etc.) for the reliable parts, so the utility that the business sees from the volatile productivity is vastly lower.

But again, Eliezer anticipates my objection with his point #3 in the comments, about taking out a loan today and being confident that you can pay it back in five years.

My only final question, then, is: isn’t “the opportunities to take advance preparations” sufficient to resolve the original Allais Paradox, even for the naive bettors who choose the “irrational” 1A/2B combination?

• Well, clearly, money should go alongside Quantum Theory in the ‘bad example’ dustbin. Money has been the root of most of these confusions.

Try replacing dollars with chits. The only rules with chits are that the more you have, the better it is, and you can never have too many. Their cumulative utility is fixed, i.e. you ‘get as much utility’ from your millionth as you do from the first. These posts aren’t a discussion on the value of money.

‘People sometimes make the irrational decision for rational reasons’ also misses the point. As this post says, if you want to use your own heuristic for deciding how to bet, go for it. If you want to maximise your expected monetary gain using decision theory, well, here’s how you do it.

• The last line really helped me see where you are coming from: You take the expectation of the utility of the money, not the utility of the expectation of the money.

However, at http://lesswrong.com/lw/hl/lotteries_a_waste_of_hope/, you argue against playing the lotto because the utility of the expectation itself is very bad. Now granted, the expectation of the utility is also not great, but let’s say the lotto offered enough of a jackpot (with the same long odds) to offer an appropriate expectation of utility. Wouldn’t you still be arguing that it is a “hope sink”, thus focusing on the utility of the expectation?

• Most mathematically competent commenters agreed that the expected utility of lotteries was bad. Some people disagreed that the utility of expectation was bad, though. Yudkowsky was arguing against these commenters, saying that both expected utility and utility of expectation are bad. The arguments in the post you linked are not the main reasons Yudkowsky does not play the lottery, but rather the arguments that convey the most new information about the lottery (and whatever the lottery is being used to illustrate).

• Steve,

The lottery could be a good deal and still be bad. Successful thieves are also socially harmful.

• Yes, but I think the point of his lottery article was that it was a bad deal for the individual player, and not just because it had a negative expected value; he was making the point that the actual existence of the (slim) possibility of riches was itself harmful. And he was not focusing on whether one actually won the lotto, he was focusing on the utility of actually having the chance of winning (as opposed to the utility of actually winning).

• Eliezer, I think your argument is flat-out invalid.

Here is the form of your argument: “You prefer X. This does not strike people as foolish. But if you always preferred X, it would be foolish. Therefore your preference really is foolish.”

That conclusion does not follow without the premise “You always prefer X if you ever prefer X.”

More plainly, you are supposing that there is some long run over which you could “pump money” from someone who expressed such-and-such a preference. BUT my preference over infinitely many repeated trials is not the same as my preference over one trial. AND you cannot demonstrate that that is absurd.

• To say it another way, Eliezer, I share your intuition that preferences that look silly over repeated trials are sometimes to be avoided. But I think they are not always to be avoided.

This sort of intuition disagreement exists in other areas. Consider the intuition that an act, X, is immoral if it cannot be universalized. This intuition is often articulated as the objection “But what if everyone did X?”

Some people think this objection has real punch. Other people do not feel the punch at all, and simply reply, “But not everyone does X.”

Similarly, you think there is real punch in saying, “But what if you had that preference over repeated dealings?”

I do not feel the punch, and I can only reply, “But I do not have that preference over repeated dealings.”

• I have a few questions about utility (hopefully this will clear up my confusion). Someone please answer. Also, the following post contains math; viewer discretion is advised (the math is very simple, however).

Suppose you have a choice between two games…

A: 1 game of 100% chance to win \$1′000′000
B: 2 games of 50% chance to win \$1′000′000 and 50% chance to win nothing

Which is better: A, B, or are they equivalent? Which game would you pick? Please answer before reading the rest of my rambling.

Let’s try to calculate utility.

For A:
A: U_total = 100% * U[\$1′000′000] + 0% * U[\$0]

For B, I see two possible ways to calculate it.

1) Calculate the utility for one game and multiply it by two:
B-1: U_1game = 50% * U[\$1′000′000] + 50% * U[\$0]
B-1: U_total = U_2games = 2 * U_1game = 2 * {50% * U[\$1′000′000] + 50% * U[\$0]}

2) Calculate all possible outcomes of money possession after 2 games. The possibilities are:
\$0 , \$0
\$0 , \$1′000′000
\$1′000′000 , \$0
\$1′000′000 , \$1′000′000

B-2: U_total = 25% * U[\$0] + 25% * U[\$1′000′000] + 25% * U[\$1′000′000] + 25% * U[\$2′000′000]

If we assume utility is linear:
U[\$0] = 0
U[\$1′000′000] = 1
U[\$2′000′000] = 2

A: U_total = 100% * U[\$1′000′000] + 0% * U[\$0] = 100% * 1 + 0% * 0 = 1
B-1: U_total = 2 * {50% * U[\$1′000′000] + 50% * U[\$0]} = 2 * {50% * 1 + 50% * 0} = 1
B-2: U_total = 25% * U[\$0] + 25% * U[\$1′000′000] + 25% * U[\$1′000′000] + 25% * U[\$2′000′000] = 25% * 0 + 25% * 1 + 25% * 1 + 25% * 2 = 1

The math is so neat!

The weirdness begins when the utility of money is nonlinear. \$2′000′000 isn’t twice as useful as \$1′000′000 (unless we split that \$2′000′000 between 2 people, but let’s deal with one weirdness at a time). With the first million one can buy a house, a car, quit their crappy job and pursue their own interests. The second million won’t change the person’s life as much, and the 3rd even less.

Let’s invent more realistic utilities (it has also been suggested that the utility of money is logarithmic, but I’m having some trouble taking the log of 0):
U[\$0] = 0
U[\$1′000′000] = 1
U[\$2′000′000] = 1.1 (reduced from 2 to 1.1)

A: U_total = 100% * U[\$1′000′000] + 0% * U[\$0] = 100% * 1 + 0% * 0 = 1
B-1: U_total = 2 * {50% * U[\$1′000′000] + 50% * U[\$0]} = 2 * {50% * 1 + 50% * 0} = 1
B-2: U_total = 25% * U[\$0] + 25% * U[\$1′000′000] + 25% * U[\$1′000′000] + 25% * U[\$2′000′000] = 25% * 0 + 25% * 1 + 25% * 1 + 25% * 1.1 = 0.775

Hmmmm… B-1 is not equal to B-2. Either I have to change around utility function values, or discard one of them as the wrong calculation, or some other mistake I didn’t think of. Maybe U[\$0] != 0.

Starting with the assumption that B-1 = B-2 (U[\$1′000′000] = 1, U[\$2′000′000] = 1.1), then:
2 * {50% * U[\$1′000′000] + 50% * U[\$0]} = 25% * U[\$0] + 25% * U[\$1′000′000] + 25% * U[\$1′000′000] + 25% * U[\$2′000′000]

Solving for U[\$0]:
2 * {50% * 1 + 50% * U[\$0]} = 25% * U[\$0] + 25% * 1 + 25% * 1 + 25% * 1.1
1 + U[\$0] = 0.25 * U[\$0] + 0.775
0.75 * U[\$0] = −0.225
U[\$0] = −0.3

B-1 = B-2 = 0.7. Intuitively this kind of makes sense. Comparing:
A: 100% * U[\$1′000′000] = 50% * U[\$1′000′000] + 50% * U[\$1′000′000]
to
B: 25% * U[\$0] + 25% * U[\$1′000′000] + 25% * U[\$1′000′000] + 25% * U[\$2′000′000] = 50% * U[\$1′000′000] + 25% * U[\$0] + 25% * U[\$2′000′000]

A (=/>/<)? B
50% * U[\$1′000′000] + 50% * U[\$1′000′000] (=/>/<)? 50% * U[\$1′000′000] + 25% * U[\$0] + 25% * U[\$2′000′000]
The first 50% term is the same, so it cancels out:
50% * U[\$1′000′000] (=/>/<)? 25% * U[\$0] + 25% * U[\$2′000′000]
0.5 > 0.2
The chance to win 2 million doesn’t outweigh how much it would suck to win nothing, so therefore the certainty of 1 million is preferable. The negative utility of U[\$0] is absorbed by its 0 probability coefficient in A.

Or maybe calculation B-1 is just plain wrong, but that would mean we cannot calculate the utility of discrete events and add the utilities up.

Is any of this correct? What kind of calculations would you do?

• A bird in the hand is indeed worth 2 in the bush.

• B-1 is wrong because you’re not using marginal utility. On the second repetition, U_marginal[\$1,000,000] is either 1 or 0.1 depending on whether you lost or won on the first play. You can still add the utilities of events up, but the first and second plays are different events, utility-wise, so you can’t multiply by 2. The correct expression is:

(50% * U(\$0) + 50% * U(the first million)) + (50% * U(\$0) + 50% * (50% * U(the first million) + 50% * U(the second million)))

which comes out to 0.775.
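This correction is equivalent to valuing total wealth over the four equally likely outcome pairs; a minimal check in Python with the thread’s assumed utilities:

```python
from itertools import product

# Assumed utilities of *total* wealth, as given in the thread.
U = {0: 0.0, 1_000_000: 1.0, 2_000_000: 1.1}

# Two independent games, each a 50/50 shot at $1,000,000:
# enumerate the four equally likely outcome pairs.
expected_utility = sum(0.25 * U[a + b]
                       for a, b in product([0, 1_000_000], repeat=2))

print(expected_utility)  # ~0.775, matching the marginal-utility calculation
```

The naive B-1 calculation double-counts the first million’s utility; valuing final wealth (or, equivalently, marginal utilities) gives 0.775.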

• I see. Thank you, Nick. I was confused by the idea that utility could be proportional, i.e. 100% * U[\$1′000′000] = {50% * U[\$1′000′000]} * 2, because when you put two 50% * U[\$1′000′000] together, the utility was less than 100% * U[\$1′000′000]. But that was because U[\$1′000′000] = U[\$1′000′000] is not always true, depending on whether it’s the 1st or 2nd million. U[\$1′000′000] going from \$0 to \$1′000′000 is not the same as U[\$1′000′000] going from \$1′000′000 to \$2′000′000.

Back to Allais:

• 1A. 100% * U[\$24,000] + 0% * U[\$0]

• 1B. 33/34 * U[\$27,000] + 1/34 * U[\$0]

• 2A. 34% * U[\$24,000] + 66% * U[\$0]

• 2B. 33% * U[\$27,000] + 67% * U[\$0]

In 1A, U[\$24,000] is going from \$0 to \$24,000. In 1A, U[\$0] is going from \$0 to \$0.
In 1B, U[\$27,000] is going from \$0 to \$27,000. In 1B, U[\$0] is going from \$0 to \$0.
In 2A, U[\$24,000] is going from \$0 to \$24,000. In 2A, U[\$0] is going from \$0 to \$0.
In 2B, U[\$27,000] is going from \$0 to \$27,000. In 2B, U[\$0] is going from \$0 to \$0.
Looks like all the variables, U[money] = U[money], hold.

So if 1A > 1B, then 100% * U[\$24,000] + 0% * U[\$0] > 33/34 * U[\$27,000] + 1/34 * U[\$0]

• Multiply by 34%:
34% * (100% * U[\$24,000] + 0% * U[\$0]) > 34% * (33/34 * U[\$27,000] + 1/34 * U[\$0])

• Add 66% * U[\$0] (which makes the total percentages add up to 100%):
34% * (100% * U[\$24,000] + 0% * U[\$0]) + 66% * U[\$0] > 34% * (33/34 * U[\$27,000] + 1/34 * U[\$0]) + 66% * U[\$0]

• Algebra:
34% * U[\$24,000] + 0% * U[\$0] + 66% * U[\$0] > 33% * U[\$27,000] + 1% * U[\$0] + 66% * U[\$0]

• More algebra:
34% * U[\$24,000] + 66% * U[\$0] > 33% * U[\$27,000] + 67% * U[\$0]

• Meaning 2A > 2B if 1A > 1B
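The derivation above can be checked mechanically: gamble 2X is just gamble 1X played with probability 34% (and \$0 otherwise), so any utility assignment that ranks 1A over 1B must rank 2A over 2B, and vice versa. A sketch in Python (the specific utility values are arbitrary illustrations):

```python
def eu(lottery, U):
    """Expected utility of a lottery given as (probability, payoff) pairs."""
    return sum(p * U[x] for p, x in lottery)

g1A = [(1.0, 24_000)]
g1B = [(33 / 34, 27_000), (1 / 34, 0)]
# Scenario 2 is scenario 1 diluted by a 66% chance of nothing:
g2A = [(0.34, 24_000), (0.66, 0)]
g2B = [(0.34 * 33 / 34, 27_000), (0.34 * 1 / 34 + 0.66, 0)]

for U in ({0: 0.0, 24_000: 1.0, 27_000: 1.01},   # concave enough to prefer the A's
          {0: 0.0, 24_000: 1.0, 27_000: 1.50}):  # near-linear: prefers the B's
    # Whichever way the scenario-1 preference goes, scenario 2 must agree.
    assert (eu(g1A, U) > eu(g1B, U)) == (eu(g2A, U) > eu(g2B, U))
print("preference ordering carries over")
```

So no choice of utilities rationalizes 1A > 1B together with 2B > 2A; only the combination is incoherent.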

/sigh, all this time to rediscover the obvious.

• Perhaps a lot of confusion could have been avoided if the point had been stated thus:

One’s decision should be no different even if the odds of the situation arising that requires the decision are different.

Footnote against nitpicking: this ignores the cost of making the decision itself. We may choose to gather less information and not think as hard for decisions about situations that are unlikely to arise. That factor isn’t relevant in the example at hand.

• Actually… I don’t agree with this example as being a good example of intuition failing. The problem is people think about this scenario as if it were real life. In real life there would be a delayed payout, and in the case of a delayed payout on their “ticket”, the ticket with 100% certainty is more LIQUID than the ticket with the better expectation. Liquidity itself has utility. Maybe the liquidity of the certain payoff is only due to the rest of society being dumb; however, even if that is the case, if you know the rest of society is dumb you must take that into account when making your decision. In this case the brain does not seem to be wrong and seems to actually be choosing correctly. The brain is just taking your example and adding lots of extra details to it to make it feel more realistic (this is certainly an undesired effect for researchers trying to learn about people’s thoughts or interests, but who cares about them). The brain often adds a bunch of assumed details to a confusing situation; this is basically the definition of how intuition works. Now, you have to consider the odds of this exact example coming up, or the odds of the imagined example coming up… and how well the brain will likely handle each situation… then use that information to determine if the brain is actually mistaken or not.

In the case of electronic store warranties, they usually aren’t worthwhile because they are designed to not be worthwhile, just like mail-in rebates are designed to often go unredeemed… However, in the case where your personal time is more valuable by far than any of the costs, it starts to make sense.

On another note, how rich did Feynman or Kac get? (Either a ton, or not that much, depending on whether they wanted to help people or take their pennies!)