I found this exercise surprising and useful. Suppose we accept the standard model that utility is logarithmic in money, that we’re paid $100,000 a year, and, somewhat arbitrarily, that we use that annual income as the baseline for our utility calculations. We go out for a meal with 10 people, each of whom spends $20 on food. At the end of the meal, we can either all put in $20, or we can randomize and have one person pay the whole $200. All other things being equal, how much should we be prepared to pay to avoid randomization?
Take a guess at the rough order of magnitude. Then look at this short Python program until you’re happy that it’s calculating the amount that you were trying to estimate, and then run it to see how accurate your estimate was.
from math import exp, log

w = 100000  # baseline wealth: annual income
b = 20      # each person's share of the bill
k = 10      # number of diners

# With probability 1/k you pay the whole bill (k*b); otherwise nothing.
# The certainty equivalent of the gamble is exp(E[log wealth]); print the
# premium you'd pay, on top of your own $20, to avoid randomizing.
print(w - b - exp(log(w - k * b) / k + log(w) * (1 - 1.0 / k)))
Incidentally, I discovered this while working out the (trivial) formula for an approximation to it, following conversations with Paul Christiano and Benja Fallenstein.
EDITED TO ADD: If you liked this, check out Expectorant by Bethany Soule of Beeminder fame.
Conversely, if you’d pay much more than this, you are absurdly risk averse. Here’s a PDF of a classic paper by Rabin: Risk Aversion and Expected-Utility Theory: A Calibration Theorem.

Abstract:

Within the expected-utility framework, the only explanation for risk aversion is that the utility function for wealth is concave: a person has lower marginal utility for additional wealth when she is wealthy than when she is poor. This paper provides a theorem showing that expected-utility theory is an utterly implausible explanation for appreciable risk aversion over modest stakes: within expected-utility theory, for any concave utility function, even very little risk aversion over modest stakes implies an absurd degree of risk aversion over large stakes. Illustrative calibrations are provided.
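To make the abstract’s point concrete with the post’s own log-utility model (this is my illustration, not a calculation from the paper): at a baseline wealth of $100,000, log utility implies almost no aversion even to a 50-50 gamble of losing or gaining $1,000.

```python
from math import exp, log

w = 100000    # baseline wealth, as in the post
swing = 1000  # 50-50 gamble: lose $1,000 or gain $1,000

# Certainty equivalent of the gamble under log utility, and the
# premium the agent would pay to avoid the gamble entirely.
ce = exp(0.5 * log(w - swing) + 0.5 * log(w + swing))
premium = w - ce
print(premium)  # about $5
```

So a log-utility agent would pay only about $5 to avoid a ±$1,000 coin flip, which is the sense in which concave utility can’t generate appreciable risk aversion at modest stakes without implying absurd aversion at large ones.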
This seems to make an unwarranted assumption about exactly how the marginal utility diminishes.
The paper, or my comment? I interpreted the paper as an attack on (explanatory) models of risk aversion that are based on this (quite general) type of utility curve, with the conclusion that observed behavior can’t be motivated by such a curve.
This is a great example of If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics. If the assumption is your True Rejection, it’s worth playing around with alternate models to see if you can get a different answer. The simple truth is that humans are dynamically inconsistent.
Improper measurements. You can’t compare a time-based number (annual income) to a one-time decision (a lump sum of $0, $20 or $200). w should be something like “expected future lifetime spending” in order for this to be a reasonable risk-preference calculation. Further comments assume these are quite poor folks who’ll only ever consume $100k in the rest of their lives.
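To see how much the choice of baseline matters, here is the post’s formula wrapped in a function and evaluated at two baselines; the $1,000,000 lifetime-spending figure is made up purely for illustration:

```python
from math import exp, log

def premium(w, b=20, k=10):
    """Premium, beyond the $20 share, that a log-utility agent with
    baseline wealth w would pay to avoid randomizing the bill
    (the same formula as the program in the post)."""
    return w - b - exp(log(w - k * b) / k + log(w) * (1 - 1.0 / k))

p_income = premium(100_000)      # baseline = annual income
p_lifetime = premium(1_000_000)  # a made-up lifetime-spending figure
print(p_income, p_lifetime)      # roughly $0.018 vs. $0.0018
```

The premium scales roughly as 1/w, so a larger baseline makes the randomization even more obviously harmless.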
I guessed high by about 5x. The choice is even more trivial than I thought. I will continue happily playing credit card roulette :)
I agree that measuring by annual income isn’t really legit, but I never know what figure to use here, and it seemed at least like a reasonable lower bound.
Just say that you have that much money. Or specify that you do this once a year and you don’t save money between years.
Shouldn’t your link have “latex” in place of “download”?
This seems to disregard time preferences. Losing $200 now hurts a lot more than the joy of earning $200 over the course of the following year.
If I set w to “amount currently in my checking account that I consider available for random impulse buys” (say $400), then I get an answer that’s almost exactly in line with my intuition.
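For what it’s worth, plugging w = 400 into the formula from the post’s program (my own check, using this commenter’s figure) does give an answer in dollars rather than cents:

```python
from math import exp, log

w = 400  # money currently available for impulse buys
b = 20   # share of the bill
k = 10   # number of diners

premium = w - b - exp(log(w - k * b) / k + log(w) * (1 - 1.0 / k))
print(premium)  # about $6.79
```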
The key idea here is linearization: even if a function is substantially non-linear overall, it is usually approximately linear in a small region around a point. Since the meal price is small compared to income, utility is effectively linear over the relevant range.
If you haven’t done the math in a while, it’s a good exercise to see that when you do the Taylor approximations for the two functions, they are equal to first order.
Right: it was by using the Taylor series around the mean that I got the approximation based on the variance. Paul pointed out that you could do this, and presumably had already done the calculation himself.
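For readers who want to check this without following the link, here is a sketch of the variance-based approximation next to the exact answer; this is my reconstruction from the comment above, not necessarily the exact linked formula:

```python
from math import exp, log

w, b, k = 100000, 20, 10

# Exact premium, as computed by the program in the post.
exact = w - b - exp(log(w - k * b) / k + log(w) * (1 - 1.0 / k))

# A second-order Taylor expansion of E[log(wealth)] around the mean
# wealth (w - b) gives: premium ~= Var(payment) / (2 * (w - b)).
# The payment is k*b with probability 1/k and 0 otherwise, so
# Var = (k*b)**2 / k - b**2 = b**2 * (k - 1).
approx = b**2 * (k - 1) / (2 * (w - b))

print(exact, approx)  # both come out to about $0.018
```

To first order the two agree, which is exactly the exercise suggested in the comment two up.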
I tried to follow the link and it asked me to download something. Can you post the formula?
Sorry about that. I’ve applied the fix that Douglas_Knight suggests below, it shouldn’t ask you to download now. The formula is simple enough that it’s almost a spoiler for the main challenge so I wanted to hide it behind a link.
It still asked me to download.
Looks like I reverted it somehow; I don’t know how that happened, sorry! It now displays for me.
I got within 10% of the correct answer!
Yeah, people often run arguments like this without actually considering the magnitude.