http://en.wikipedia.org/wiki/Expected_utility_hypothesis#Expected_value_and_choice_under_risk: “In the presence of risky outcomes, a decision maker could use the expected value criterion as a rule of choice: higher expected value investments are simply the preferred ones. For example, suppose there is a gamble in which the probability of getting a $100 payment is 1 in 80 and the alternative, and far more likely, outcome, is getting nothing. Then the expected value of this gamble is $1.25. Given the choice between this gamble and a guaranteed payment of $1, by this simple expected value theory people would choose the $100-or-nothing gamble. However, under expected utility theory, some people would be risk averse enough to prefer the sure thing, even though it has a lower expected value, while other less risk averse people would still choose the riskier, higher-mean gamble.”
Also,
a choice: I can take a certain 1 unit of utility, or have a 1 in 10 million chance of getting 1 billion utility.
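The arithmetic behind both gambles is just probability times payoff. A minimal sketch (the figures come from the two examples above; the helper name is mine):

```python
def expected_value(p, payoff):
    """EV of a binary gamble: win `payoff` with probability p, else get nothing."""
    return p * payoff

# Wikipedia's example: a 1-in-80 chance of $100 versus a sure $1.
ev_dollars = expected_value(1 / 80, 100)
print(ev_dollars)  # 1.25 -- a pure expected-value maximizer takes the gamble

# The utility example: a 1-in-10-million chance of 1 billion utility
# versus a certain 1 unit of utility.
ev_utils = expected_value(1 / 10_000_000, 1_000_000_000)
print(ev_utils)  # 100.0 -- the gamble's expected utility is 100x the sure thing
```

So under straight expected utility maximization the gamble wins by a wide margin; the question in this thread is whether people's intuitions actually track that.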
Realistic examples can make things easier to think about. Given the choice between getting a dollar for sure, or a 1 in 10 million chance of getting a guaranteed cure for cancer, which do you choose?
I deliberately do not use money here, because of confusions over non-linearity. I dislike your example because there are, for me, qualitative differences between a cure for cancer and some amount of money. I was trying to make my example as non-emotive as possible.
IME it’s a lot easier to make these estimations if I calibrate my utils. Otherwise I’m just tossing labels around without ever dereferencing them.
If I assume, somewhat arbitrarily, that “1 unit of utility” is a just-noticeable utility difference at my current average utility… and I try to imagine what “1 billion utility” might actually be like, I have real trouble coming up with anything about which I don’t have strong emotions.
This isn’t terribly surprising, since emotions are tied pretty closely to value judgments in my brain.
Is it different for you?