# Risk aversion vs. concave utility function

In the comments to this post, several people independently stated that being risk-averse is the same as having a concave utility function. There is, however, a subtle difference here. Consider the example proposed by one of the commenters: an agent with a utility function

u = sqrt(p) utilons for p paperclips.

The agent is offered a choice between a bet with a 50/50 chance of paying off 9 or 25 paperclips, or simply receiving 16.5 paperclips. The expected payoff of the bet is a full 9/2 + 25/2 = 17 paperclips, yet its expected utility is only 3/2 + 5/2 = 4 = sqrt(16) utilons, which is less than the sqrt(16.5) utilons of the guaranteed deal, so our agent goes for the latter, losing 0.5 expected paperclips in the process. Thus, it is claimed, our agent is risk-averse in that it sacrifices 0.5 expected paperclips to get a guaranteed payoff.
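The arithmetic above is easy to check numerically (a quick sketch, not part of the original post):

```python
import math

def utility(p):
    """Utility of p paperclips for the sqrt-utility agent."""
    return math.sqrt(p)

# The 50/50 bet paying 9 or 25 paperclips, vs. a sure 16.5 paperclips.
bet = [(0.5, 9), (0.5, 25)]

expected_paperclips = sum(prob * p for prob, p in bet)           # 17.0
expected_utility = sum(prob * utility(p) for prob, p in bet)     # 4.0
sure_utility = utility(16.5)                                     # ~4.062

# The agent maximizes expected utilons, so it takes the sure deal
# even though the bet pays 0.5 more paperclips in expectation.
prefers_sure_deal = sure_utility > expected_utility
```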

Is this a good model of the cognitive bias of risk aversion? I would argue that it is not. Our agent ultimately cares about utilons, not paperclips, and in the present case it does perfectly fine at rationally maximizing expected utilons. A cognitive bias should instead be some irrational behavior pattern that can be exploited to take utility (rather than paperclips) away from the agent. Consider now another agent, with the same utility function as before, but with one small additional trait: it strictly prefers a sure payoff of 16 paperclips to the above bet. Given our agent's utility function, 16 is the point of indifference, so could there be any problem with its behavior? It turns out there is. For example, we can follow the post on Savage's theorem (see Postulate #4). If the sure payoff of

16 paperclips = 4 utilons

is strictly preferred to the bet

{P(9 paperclips) = 0.5; P(25 paperclips) = 0.5} = 4 utilons,

then there must also exist some finite δ > 0 such that the agent strictly prefers a guaranteed 4 utilons to betting on

{P(9) = 0.5 - δ; P(25) = 0.5 + δ} = 4 + 2δ utilons

all at a loss of 2δ expected utilons! This is equivalent to our agent being willing to pay a finite number of paperclips to replace the bet with a sure deal of the same expected utility.
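The 2δ figure follows directly from u(9) = 3 and u(25) = 5; a quick check (a sketch, not from the original post):

```python
import math

def u(p):
    """sqrt utility from the post."""
    return math.sqrt(p)

def shifted_bet_utility(delta):
    """Expected utility of {P(9) = 0.5 - delta; P(25) = 0.5 + delta}."""
    return (0.5 - delta) * u(9) + (0.5 + delta) * u(25)

# With u(9) = 3 and u(25) = 5, moving probability mass delta from the
# low outcome to the high one raises expected utility by exactly 2*delta.
delta = 0.1
gain_over_sure_16 = shifted_bet_utility(delta) - u(16)  # 2 * delta = 0.2
```

Preferring the sure 16 paperclips to this bet forfeits exactly that gain.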

What we have just seen falls pretty nicely within the concept of a bias. Our agent has a perfectly fine utility function, but it also has this other thing, let us call it "risk aversion", that makes the agent's behavior fall short of perfectly rational, and that is independent of its concave utility function for paperclips. (Note that our agent has linear utility for utilons, but is still willing to pay some of those to achieve certainty.) Can we somehow fix our agent? Let's see whether we can redefine the utility function u′(p) in some way that gives us a consistent preference for

a guaranteed 16 paperclips

over the

{P(9) = 0.5; P(25) = 0.5}

bet. But we would also like the agent to still strictly prefer the bet

{P(9 + δ) = 0.5; P(25 + δ) = 0.5}

to {P(16) = 1} for some finite δ > 0, so that our agent is not infinitely risk-averse. Can we say anything about this situation? Well, if u′(p) is continuous, there must exist some number δ′ with 0 < δ′ < δ such that our agent is indifferent between {P(16) = 1} and

{P(9 + δ′) = 0.5; P(25 + δ′) = 0.5}.

And, of course, being risk-averse (in the above-defined sense), our supposedly rational agent will prefer, no harm done, the guaranteed payoff to a bet of the same expected utility u′… Sounds familiar, doesn't it?
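To make the continuity argument concrete, here is a sketch with one hypothetical risk-averse valuation (expected utility minus a penalty proportional to the spread of utility outcomes; the penalty model and the weight `lam` are my assumptions, not from the post), using bisection to locate the indifference point δ′:

```python
import math

def u(p):
    return math.sqrt(p)

def value_of_bet(delta, lam=0.1):
    """Hypothetical risk-averse valuation of {P(9+delta)=0.5; P(25+delta)=0.5}:
    expected utility minus lam times the spread of the two utility outcomes.
    This is an assumed model, purely for illustration."""
    lo, hi = u(9 + delta), u(25 + delta)
    expected = 0.5 * lo + 0.5 * hi
    spread = (hi - lo) / 2
    return expected - lam * spread

# The sure deal {P(16) = 1} is worth u(16) = 4 with zero spread.
# value_of_bet increases with delta, so bisect for the delta' at which
# the agent is indifferent between the shifted bet and the sure 16.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if value_of_bet(mid) < u(16):
        lo = mid
    else:
        hi = mid
delta_prime = (lo + hi) / 2
```

For this choice of `lam`, δ′ comes out strictly positive, reproducing the situation in the text: the agent is indifferent between the sure 16 and a bet of strictly higher expected utility.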

I would like to stress again that although our first agent has a concave utility function for paperclips, which causes it to reject bets with a higher expected payoff of paperclips in favor of guaranteed payoffs of fewer paperclips, it still maximizes its expected utilons, for which it has linear utility. Our second agent, however, has an extra property that causes it to sacrifice expected utilons to achieve certainty. And it turns out that with this property it is impossible to define a well-behaved utility function! It therefore seems natural to distinguish being rational with a concave utility function, on the one hand, from being risk-averse and unable to have a well-behaved utility function at all, on the other. The latter case seems more subtle at first sight, but causes a more fundamental kind of problem. This is why I feel that a clear, even if minor, distinction between the two situations is worth making explicit.

A rational agent can have a concave utility function. A risk-averse agent cannot be rational.

(Of course, even in the first case, the question of whether we want a concave utility function remains open.)

• 31 Jan 2012 14:43 UTC
4 points

We ought to have two different terms for 'concavity of utility function' and 'Allais-paradox-like behaviour'; having "risk-averse" mean both is too likely to lead to confusion.

• Concavity of utility function = diminishing marginal utility.

Edit: That should probably be convexity, but then you should also have said convexity.

• (I usually specify whether I mean concave upwards or concave downwards, because I can never remember the standard meaning of concave by itself...)

• Your claim that a risk-averse agent cannot be rational is trivially true because it is purely circular.

You've defined a risk-averse agent as someone who does not maximize their expected utilons. The meaning of "rational" around these parts is "maximizes expected utilons." The fact that you took a circuitous route to make this point does not change the fact that it is trivial.

I'll break down that point in case it's non-obvious. Utilons do not exist in the real world; there is no method of measuring utilons. Rather, they are a theoretical construct you are employing. You've defined a rational agent as one who maximizes the number of utilons he acquires. You've specified a function for how he calculates these, but the specifics of that function are immaterial. You've then shown that someone who does not rationally maximize these utilons is not a rational utilon maximizer.

Risk aversion with respect to paper clips or dollars is an empirical claim about the world. Risk aversion with respect to utilons is a claim about preferences over a theoretical construct that is itself defined by those preferences. It is not meaningful to discuss it, because the answer follows logically from the definition you have chosen.

• I'll break down that point in case it's non-obvious. Utilons do not exist in the real world; there is no method of measuring utilons.

(There is no method in the context of this discussion, but figuring out how to "measure utilons" (with respect to humans) is part of the FAI problem. If an agent doesn't maximize the utility suggested by that agent's construction (in the same sense in which human preference can hopefully be defined based on humans), that would count as a failure of that agent's rationality.)

• Risk aversion with respect to paper clips or dollars is an empirical claim about the world. Risk aversion with respect to utilons is a claim about preferences over a theoretical construct that is itself defined by those preferences. It is not meaningful to discuss it, because the answer follows logically from the definition you have chosen.

And yet this was still disputed. Perhaps the point being made is less obvious to some others than it is to you. The same applies to many posts.

• Perhaps the point being made is less obvious to some others than it is to you. The same applies to many posts.

This is like a dismissive... compliment? I'm not sure how to feel!

Seriously, though, it doesn't undermine my point. This article ultimately reaches the same basic conclusion, but does so in a very roundabout way. By the definition of "utilons," converting outcomes into utilons eliminates risk aversion. This extensive discussion ultimately makes the point that it's irrational to be utilon risk-averse, but it doesn't really hit the bigger point that utilon risk aversion is fundamentally nonsensical. The fact that people don't realize that circular reasoning is going on is all the more reason to point out that it is happening.

• I disagree with your connotations. While the point is obvious and even follows logically from the premises, it is not 'circular' in any meaningful sense. People are still getting confused on the issue, so explaining it is fine.

• I don't mean obvious in the "Why didn't I think of that?" sense. I mean obvious in the trivial sense. When I say that it is circular, I don't mean simply that the conclusion follows logically from the premises. That is the ultimate virtue of an argument. What I mean is that the conclusion is one of the premises. The definition of a rational person is one who maximizes their expected utility. Therefore, someone who is risk-averse with respect to utility is irrational; our definition of rational guarantees that this is so.

I certainly see why the overall issue leads to confusion and why people don't see the problem instantly: the language is complex, and the concept of "utilons" folds a lot of concepts into itself, so that it's easy to lose track of what it really means. I don't think this post really appreciates this issue, and it seems to me to be the deepest problem with this discussion. It reads like it is analyzing an actual problem, rather than unpacking an argument to show how it is circular, and I think the latter is the best description of the actual problem.

In other words, the article makes it easy to walk away without realizing that it is impossible for a rational person to be risk-averse towards utility because that contradicts what we mean by "rational person." That seems like the key issue here to me.

• I don't mean obvious in the "Why didn't I think of that?" sense. I mean obvious in the trivial sense. When I say that it is circular, I don't mean simply that the conclusion follows logically from the premises.

And, for the sake of clarity, I have expressed disagreement with this position.

For what it's worth, I don't necessarily agree with the post in full; I just don't apply this particular rejection.

• Is your agent a human being (or some other animal, as opposed to an artificial creature created specifically to be rational)? If it is, then you should distinguish between two different utilities of the same lottery when the drawing is in the future:

1) The expected utility after the drawing

2) The utility (actual, not expected) of having the drawing in your future

The second is influenced by the first, but also by the emotions and any other experiences caused by beliefs about the lottery. This post deals very well with the first, but ignores the second.

• OK, I finally get it: nyan_sandwich and you are using "risk aversion" in the common way, to describe why someone is unwilling to risk $50 and/or certainty effects, not in the way standard among economists. If someone takes an irrational action and tries to justify it by citing risk aversion, should we adopt that as the name of the bias or say that it was a bad justification?

People do exhibit inconsistent amounts of risk aversion over small and large risks, but calling that "risk aversion" seems misplaced. We know it's inconsistent to be scared to fly yet feel fine riding in a car, but we wouldn't call that a "bias against death" or a "cautious bias". I feel you are doing something analogous here.

• 31 Jan 2012 6:54 UTC
2 points

• Man, I chose risk aversion as an example I thought would be uncontroversially accepted as a bias. Oh well...

• Man, I chose risk aversion as an example I thought would be uncontroversially accepted as a bias. Oh well...

It is uncontroversially a bias away from expected utility maximisation. (I have a post in mind exploring why the 'expected' part of utility maximisation is not obviously a correct state of being, for related reasons.)

• It is uncontroversially a bias away from expected utility maximisation.

No it's not; risk aversion is a property of utility functions. You're talking about the certainty effect.

• No it's not; risk aversion is a property of utility functions. You're talking about the certainty effect.

No I'm not. I'm talking about the same thing Nyan is talking about. That is, risk aversion when it comes to actual utility, which is itself a general bias of humans. He isn't talking about diminishing marginal utility, which is the property of utility functions. Once you start being risk-averse with respect to actual utility, you stop being an expected utility maximiser and become a different kind of utility maximiser, one that isn't obsessed with the mean over the probability distribution.

• No I'm not. I'm talking about the same thing Nyan is talking about.

nyan_sandwich mislabeled their discussion, which appears to be the source of much of the controversy. If you want to talk about minimax, talk about minimax; don't use another term that has an established meaning.

That is, risk aversion when it comes to actual utility, which is itself a general bias of humans.

The only general bias I've heard of that's close to this is the certainty effect. If there's another one I haven't heard of, I would greatly appreciate hearing about it.

• nyan_sandwich mislabeled their discussion

Sorry guys.

The only general bias I've heard of that's close to this is the certainty effect. If there's another one I haven't heard of, I would greatly appreciate hearing about it.

I don't think it's all the certainty effect. The bias that people seem to have can usually be modeled by a nonlinear utility function, but isn't it still there in cases where it's understood that utility is linear (lives saved, charity dollars, etc.)?

• but isn't it still there in cases where it's understood that utility is linear (lives saved, charity dollars, etc.)?

Why would those be linear? (i.e., who understands that?)

Utility functions are descriptive; they map from expected outcomes to actions. You measure them by determining what actions people take in particular situations.

Consider scope insensitivity. It doesn't make sense if you measure utility as linear in the number of birds: aren't 200,000 birds 100 times more valuable than 2,000 birds? It's certainly 100 times more birds, but that doesn't tell us anything about value. What it tells you is that the action "donate to save birds in response to prompt" provides $80 worth of utility, and the number of birds doesn't look like an input to the function.

And while scope insensitivity reflects a pitfall in human cognition, it's not clear it doesn't serve goals. If the primary benefit to a college freshman of, say, opposing genocide in Darfur is that they signal their compassion, it doesn't really matter what the scale of the genocide in Darfur is. Multiply or divide the number of victims by ten, and they're still going to slap on a "save Darfur" t-shirt, get the positive reaction from that, and then move on with their lives.

Now, you may argue that your utility function should be linear with respect to some feature of reality, but that's like saying your BMI should be 20. It is whatever it is, and will take effort to change. Whether or not it's worth the effort is, again, a question of revealed preferences.

• Why would those be linear?

Given that the scope of the problem is so much larger than the influence we usually have when making the calculations here, the gradient at the margin is essentially linear.

(i.e., who understands that?)

Most people who have read Eliezer's posts. He has made at least one on this subject.

• Given that the scope of the problem is so much larger than the influence we usually have when making the calculations here, the gradient at the margin is essentially linear.

That's exactly what I would say, in way fewer words. Well said.

• nyan_sandwich mislabeled their discussion, which appears to be the source of much of the controversy. If you want to talk about minimax, talk about minimax; don't use another term that has an established meaning.

In the specific case of risk aversion he is using the term correctly, and your substitution of the meaning behind "diminishing marginal utility" is not a helpful correction; it is an error. Minimax is again related, but also not the correct word. (I speak up because in Nyan's situation I would be frustrated by being falsely corrected.)

• In the specific case of risk aversion he is using the term correctly

If you could provide examples of this sort of usage in the utility theory literature or textbooks, I will gladly retract my corrections. I don't recall seeing "risk aversion" used this way before.

Minimax is again related, but also not the correct word.

nyan_sandwich has edited their post to reflect that minimax was their intention.

• If you could provide examples of this sort of usage in the utility theory literature or textbooks, I will gladly retract my corrections. I don't recall seeing "risk aversion" used this way before.

It is just the standard usage, applied appropriately to utility. Even the 'certainty effect' that you mention is an example of being risk-averse with respect to utility, albeit one limited to a specific subset of cases: again, where the object being risked is evaluated in terms of utility.

nyan_sandwich has edited their post to reflect that minimax was their intention.

That may apply somewhere in the post, but in this specific application it just wouldn't have made sense in the sentence.

• Oh cool, can’t wait.

• If the sure payoff of

16 paperclips = 4 utilons

is strictly preferred to the bet

{P(9 paperclips) = 0.5; P(25 paperclips) = 0.5} = 4 utilons

then you have a contradiction in terms, because you shouldn't have a strict preference between outcomes with the same number of utilons.

The sqrt(paperclips) agent should be indifferent between 16 paperclips and {.5: 9; .5: 25} paperclips. It strictly prefers 16.5 paperclips to either 16 paperclips or {.5: 9; .5: 25} paperclips.

Savage's 4th axiom (the strict preference) says that in order for you to strictly prefer 16.5 paperclips to 16 paperclips, there has to be a difference in the utilon values. There is: 16.5 paperclips represents about 4.06 utilons vs. only 4 for 16 paperclips.

By the 4th axiom, we can construct other bets: say, {.5: 9.4; .5: 25.4}. The agent strictly prefers 16.5 paperclips to that deal (which has about 4.05 utilons).
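The utilon values in this comment are easy to verify numerically (a quick sketch, not part of the original comment):

```python
import math

def u(p):
    return math.sqrt(p)

def expected_u(bet):
    """Expected utility of a bet given as (probability, paperclips) pairs."""
    return sum(prob * u(p) for prob, p in bet)

sure_165 = u(16.5)                                 # ~4.062
sure_16 = u(16)                                    # 4.0
bet_a = expected_u([(0.5, 9), (0.5, 25)])          # 4.0 exactly
bet_b = expected_u([(0.5, 9.4), (0.5, 25.4)])      # ~4.053

# The sqrt agent is indifferent between a sure 16 and bet_a,
# and strictly prefers a sure 16.5 to bet_b.
```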

• Upvoted. In my opinion, the literature on risk-averse agents is logically consistent, and being risk-averse does not imply irrationality. I agree with Vaniver's comments. Also, humans are, on average*, risk-averse.

*For example, with respect to markets: 'market clearing' average in a Walrasian auction sense.

• See also: Diminishing marginal utility of wealth cannot explain risk aversion, which I found in the comment here: http://lesswrong.com/lw/15f/misleading_the_witness/11ad, but I think I read it in another thread on LessWrong which I can't find at the moment.

1. As for me, one of the main reasons I wouldn't take a bet winning $110 or losing $100 is that I would take the existence of someone willing to offer such a bet as evidence that there's something about the coin to be flipped that they know and I don't. If such a bet were implemented in a way that's very hard for either partner to game (e.g., getting one random bit from random.org with both of us looking at the computer), I'd likely take it, but I don't anticipate being offered such a bet in the foreseeable future.

2. I think some of the refused bets in the right-hand column of the table on page 3 of that paper are not as absurd as Rabin thinks. Eliezer (IIRC) pointed out that there are quite a few people who would choose a 100% chance of receiving $500 over a 10% chance of receiving $1 million. (I'm not sure whether I'd accept some of those bets myself.)

This is not to say that human preferences can always be described by a utility function (see the Allais paradox), but I don't think Rabin's argument is sufficient evidence that they can't.

• As for me, one of the main reasons I wouldn't take a bet winning $110 or losing $100 is that I would take the existence of someone willing to offer such a bet as evidence that there's something about the coin to be flipped that they know and I don't

This seems to follow the no-trade theorem for zero-sum games.

• Wouldn't you get behavior practically indistinguishable from the 'slightly risk-averse agent with a sqrt(x) utility function' by simply using x^0.499 as the utility function?
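This is easy to check numerically on the bet from the post (a quick sketch of the commenter's suggestion, not from the original thread):

```python
def u_sqrt(p):
    return p ** 0.5

def u_near(p):
    # Slightly flatter exponent: very close to sqrt, but enough to
    # strictly prefer a sure 16 to the {0.5: 9; 0.5: 25} bet.
    return p ** 0.499

def expected(u, bet):
    return sum(prob * u(p) for prob, p in bet)

bet = [(0.5, 9), (0.5, 25)]

# The sqrt agent is exactly indifferent between a sure 16 and the bet...
sqrt_diff = u_sqrt(16) - expected(u_sqrt, bet)   # 0.0
# ...while the x**0.499 agent strictly prefers the sure 16.
near_diff = u_near(16) - expected(u_near, bet)   # small but positive
```

So at this one indifference point, yes, the x^0.499 agent looks like the slightly risk-averse sqrt agent; the post's argument is that no single reweighting reproduces that preference at every equal-expected-utility comparison at once.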

Also, by the way: the resulting final utility function for any sort of variable need not be smooth, monotonically increasing, or inexpensive to calculate.

Consider my utility function for food obtained by me right now. Slightly more than is optimal for me to eat before it spoils (in the summer, without a fridge) would give no extra utility whatsoever over the right amount, or would even result in disutility (more trash). A lot more may make it worth it to invite a friend for dinner, and utility starts growing again.

Essentially, the utility peaks, then starts going down, then at some not-very-well-defined point suddenly starts growing again.

There can be all sorts of really odd-looking 'irrational' heuristics that work as a better substitute for the true utility function, which is expensive to calculate (but is known to follow a certain broken-line pattern), than some practical-to-compute utility function.

With regard to the utility of extra money: money itself is worth nothing; it's the changes you can make to your life with it that matter. As it is, I would take a 10% shot at $10 million over $100,000 for certain; 15 years ago I would have taken $10,000 for certain over a 10% shot at $10 million (though in the latter case it ought to be possible to partner with someone who has big capital to get, say, $800,000 for certain).

Ultimately, attaching utility functions to stuff is like considering a fairly bad chess AI that just sums the values of pieces and perhaps positional features. That sort of AI, running on the same hardware, is going to lose big time to AIs with cleverer board evaluation.

• Upvoted, since this is (to me) a very interesting topic, even if I disagree with your conclusion.

In short, my thesis is: taking a risk decreases your knowledge of the world, and therefore your ability to optimize, until you know whether you won or lost your bet. But explaining it in detail grew so much that I made a new article about it.

• Your article is so long I haven't read it yet. This summary is enough for me, though.

taking a risk decreases your knowledge of the world, and therefore your ability to optimize, until you know whether you won or lost your bet.

This is a very good point.

• Upvoted. This makes your comment on the other thread much clearer to me, and I appreciate it.

• I haven't read all of this post or the one it's a response to, but it looks like one could resolve the confusion here by talking explicitly about either "risk aversion with respect to outcome measures" or "risk aversion with respect to utility itself".