Risk aversion vs. concave utility function

In the comments to this post, several people independently stated that being risk-averse is the same as having a concave utility function. There is, however, a subtle difference here. Consider the example proposed by one of the commenters: an agent with a utility function

u = sqrt(p) utilons for p paperclips.

The agent is being offered a choice between a bet with a 50/50 chance of a payoff of 9 or 25 paperclips, and simply receiving 16.5 paperclips. The expected payoff of the bet is a full 9/2 + 25/2 = 17 paperclips, yet its expected utility is only 3/2 + 5/2 = 4 = sqrt(16) utilons, which is less than the sqrt(16.5) utilons of the guaranteed deal, so our agent goes for the latter, losing 0.5 expected paperclips in the process. Thus, it is claimed that our agent is risk-averse in that it sacrifices 0.5 expected paperclips to get a guaranteed payoff.
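The arithmetic above can be checked in a few lines (a quick sketch; `u` is the example's square-root utility function):

```python
import math

def u(p):
    # The example's utility function: sqrt(p) utilons for p paperclips.
    return math.sqrt(p)

# The 50/50 bet between 9 and 25 paperclips.
expected_paperclips = 0.5 * 9 + 0.5 * 25      # 9/2 + 25/2 = 17
expected_utilons = 0.5 * u(9) + 0.5 * u(25)   # 3/2 + 5/2 = 4

# The guaranteed alternative.
sure_utilons = u(16.5)

# The agent maximizes utilons, so it takes the sure 16.5 paperclips
# even though the bet has 0.5 more expected paperclips.
print(expected_paperclips, expected_utilons, sure_utilons)
```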

Is this a good model for the cognitive bias of risk aversion? I would argue that it's not. Our agent ultimately cares about utilons, not paperclips, and in the current case it does perfectly fine at rationally maximizing expected utilons. A cognitive bias should be, instead, some irrational behavior pattern that can be exploited to take utility (rather than paperclips) away from the agent. Consider now another agent, with the same utility function as before, but with one small additional trait: it strictly prefers a sure payoff of 16 paperclips to the above bet. Given our agent's utility function, 16 is the point of indifference, so could there be any problem with its behavior? It turns out there is. For example, we can follow the post on Savage's theorem (see Postulate #4). If the sure payoff of

16 paperclips = 4 utilons

is strictly preferred to the bet

{P(9 paperclips) = 0.5; P(25 paperclips) = 0.5} = 4 utilons,

then there must also exist some finite δ > 0 such that the agent strictly prefers a guaranteed 4 utilons to betting on

{P(9) = 0.5 - δ; P(25) = 0.5 + δ} = 4 + 2δ utilons

- all at the loss of 2δ expected utilons! This is also equivalent to our agent being willing to pay a finite amount of paperclips to replace the bet with a sure deal of the same expected utility.
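To make the loss concrete, here is a minimal numerical sketch (assuming, purely for illustration, δ = 0.01): the perturbed bet is worth 4 + 2δ utilons, so an agent that still takes the sure 16 paperclips forfeits 2δ expected utilons.

```python
import math

def u(p):
    # Same utility function as before: sqrt(p) utilons for p paperclips.
    return math.sqrt(p)

delta = 0.01  # any finite delta > 0 makes the same point

# The perturbed bet {P(9) = 0.5 - delta; P(25) = 0.5 + delta}.
perturbed_bet = (0.5 - delta) * u(9) + (0.5 + delta) * u(25)
sure_deal = u(16)  # the guaranteed 16 paperclips = 4 utilons

# The risk-averse agent picks the sure deal, losing 2*delta expected utilons.
loss = perturbed_bet - sure_deal
print(loss)
```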

What we have just seen falls pretty nicely within the concept of a bias. Our agent has a perfectly fine utility function, but it also has this other thing (let's name it "risk aversion") that makes the agent's behavior fall short of being perfectly rational, and that is independent of its concave utility function for paperclips. (Note that our agent has linear utility for utilons, but is still willing to pay some amount of those to achieve certainty.) Can we somehow fix our agent? Let's see if we can redefine its utility function u'(p) in some way so that it gives us a consistent preference of

guaranteed 16 paperclips

over the

{P(9) = 0.5; P(25) = 0.5}

bet, but we would also like to require that the agent still strictly prefer the bet

{P(9 + δ) = 0.5; P(25 + δ) = 0.5}

to {P(16) = 1} for some finite δ > 0, so that our agent is not infinitely risk-averse. Can we say anything about this situation? Well, if u'(p) is continuous, there must also exist some number δ′ such that 0 < δ′ < δ and our agent will be indifferent between {P(16) = 1} and

{P(9 + δ′) = 0.5; P(25 + δ′) = 0.5}.

And, of course, being risk-averse (in the above-defined sense), our supposedly rational agent will prefer, no harm done, the guaranteed payoff to the bet of the same expected utility u'… Sounds familiar, doesn't it?
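The intermediate-value step can be illustrated numerically. The sketch below assumes a hypothetical redefined utility function u'(p) = p^0.25 (an assumption chosen only because it is continuous and concave enough to strictly prefer a sure 16 paperclips to the original bet; the argument does not depend on any particular u'), and uses bisection to locate the indifference point δ′:

```python
def u_prime(p):
    # Hypothetical redefined utility function; an assumption for illustration.
    return p ** 0.25

def bet_utility(d):
    # Expected u' of the shifted bet {P(9 + d) = 0.5; P(25 + d) = 0.5}.
    return 0.5 * u_prime(9 + d) + 0.5 * u_prime(25 + d)

sure = u_prime(16)

# With this u', the sure 16 beats the unshifted bet (d = 0), while a large
# enough shift (here d = 1) makes the bet better. Since u' is continuous,
# the intermediate value theorem guarantees an indifference point
# delta_prime strictly between the two, which bisection locates.
lo, hi = 0.0, 1.0
assert bet_utility(lo) < sure < bet_utility(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    if bet_utility(mid) < sure:
        lo = mid
    else:
        hi = mid
delta_prime = (lo + hi) / 2
print(delta_prime)
```

A risk-averse agent in the above sense will then again prefer the sure payoff to this indifference bet, reintroducing for u' exactly the exploitable gap we found for u.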

I would like to stress again that, although our first agent does have a concave utility function for paperclips, which causes it to reject some bets in favor of guaranteed payoffs with fewer expected paperclips, it still maximizes its expected utilons, for which it has linear utility. Our second agent, however, has an extra property that causes it to sacrifice expected utilons to achieve certainty. And it turns out that with this property it is impossible to define a well-behaved utility function at all! Therefore it seems natural to distinguish being rational with a concave utility function, on the one hand, from being risk-averse and unable to have a well-behaved utility function, on the other. The latter case seems much more subtle at first sight, but causes a more fundamental kind of problem. This is why I feel that a clear, even if minor, distinction between the two situations is still worth making explicit.

A rational agent can have a concave utility function. A risk-averse agent cannot be rational.

(Of course, even in the first case the question of whether we want a concave utility function is still open.)