Then E(u|πm) is within one standard deviation (using dmu) of the median value of dmu.
As the Wikipedia says, “If the distribution has finite variance”. That’s not necessarily a good assumption.
Consider a policy with three possible outcomes: one pony; two ponies; the universe is converted to paperclips. What’s the median outcome? One pony. Don’t you want a pony?
The median is a robust estimator, meaning that it's harder for outliers to screw you up. The price for that, though, is indifference to the outliers, which I am not sure is advisable in the utility context.
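To make the robustness point concrete, here's a throwaway sketch (plain numpy; the numbers are made up):

```python
import numpy as np

# Nine ordinary outcomes plus one catastrophic outlier
# (values are arbitrary, just for illustration).
utilities = np.array([1.0] * 9 + [-1e12])

print(np.median(utilities))  # the median shrugs off the outlier: 1.0
print(np.mean(utilities))    # the mean is dragged to roughly -1e11
```

The median doesn't move at all, which is exactly the indifference-to-outliers trade-off.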
Indeed. But the argument about convergence when you get more and more options still applies.
Still, only if your true underlying distribution has finite variance. Check some plots of, say, a Cauchy distribution: it doesn't take much heaviness in the tails to leave a distribution with no defined variance (or mean, for that matter).
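You can watch the undefined mean in action with a quick numpy sketch (standard Cauchy sampled via the inverse CDF; seed and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard Cauchy samples via the inverse CDF: tan(pi * (U - 1/2)).
for n in (10**3, 10**5, 10**7):
    u = rng.uniform(size=n)
    x = np.tan(np.pi * (u - 0.5))
    # The sample mean never settles down as n grows,
    # but the sample median stays pinned near 0.
    print(n, np.mean(x), np.median(x))
```

No matter how large you make n, the running mean keeps jumping around; the median converges like clockwork.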
Not everything converges to a Gaussian.
You did notice that I mentioned the Cauchy distribution by name and link in the text, right?
And the Cauchy distribution is the worst possible example for defending the use of the mean—because it doesn’t have one. Not even, a la St Petersburg paradox, an infinite mean, just no mean at all. But it does have a median, exactly placed in the natural middle.
Your argument works somewhat better with one of the stable distributions with an alpha between 1 and 2. But even there, you need a non-zero beta or else median=mean! The standard deviation is an upper bound on the difference, not necessarily a sharp one.
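For the symmetric case that's easy to check by simulation. Here's a sketch using the Chambers–Mallows–Stuck sampler specialised to beta=0 (my code, not from the thread; alpha=1.5 and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def symmetric_stable(alpha, n, rng):
    """Chambers-Mallows-Stuck sampler, specialised to beta = 0."""
    v = rng.uniform(-np.pi / 2, np.pi / 2, size=n)
    w = rng.exponential(size=n)
    return (np.sin(alpha * v) / np.cos(v) ** (1 / alpha)
            * (np.cos(v - alpha * v) / w) ** ((1 - alpha) / alpha))

x = symmetric_stable(1.5, 200_000, rng)
# With beta = 0 the distribution is symmetric about 0, so the
# sample mean and sample median should both sit near 0 --
# there is simply no gap for the standard deviation to bound.
print(np.mean(x), np.median(x))
```

With a non-zero beta the distribution skews and the two statistics separate, which is where the interesting question starts.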
It would be interesting to analyse the difference between mean and median for stable distributions with non-zero beta; I’ll get round to that some day. My best guess is that you could use some fractional moment to bound the difference, instead of (the square root of) the variance.
EDIT: this is indeed the case. You can use Jensen’s inequality to show that the q-th root of the q-th absolute central moment, for 1&lt;q&lt;2, can be substituted as a bound on the difference between mean and median. For q&lt;alpha, this moment should be finite.
I only brought up Cauchy to show that infinite-variance distributions don’t have to be weird and funky. Show a plot of a Cauchy pdf to someone who had, like, one undergrad stats course and she’ll say something like “Yes, that’s a bell curve” X-/
Actually, there’s no need for higher central moments. The mean absolute deviation around the mean (which I would have called the first absolute central moment) bounds the difference between mean and median, and is sharper than the standard deviation.
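As a sanity check of that chain of bounds (my own check, not part of the parent comment), take the exponential distribution with rate 1, where everything is available in closed form:

```python
import math

# Exponential(1): all three quantities have closed forms.
mean = 1.0
median = math.log(2)      # ~0.693
mad = 2 / math.e          # E|X - mean| = 2/e ~ 0.736
sd = 1.0                  # standard deviation of Exp(1)

gap = abs(mean - median)  # ~0.307
# The chain |mean - median| <= MAD <= standard deviation holds,
# and the MAD bound is visibly tighter than the SD bound.
print(gap, mad, sd)
```

So 0.307 &lt;= 0.736 &lt;= 1: the mean absolute deviation does the job and leaves less slack than the standard deviation.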
In fact, “Pascal’s mugging” scenarios tend to pop up when you allow for utility distributions with infinite variance.
For Pascal’s Muggings I don’t think you care that much about variance—what you want is a gargantuan skew.
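A toy discrete example of the skew point (numbers entirely made up): a tiny chance of an astronomical payoff drags the mean anywhere you like while the median never notices.

```python
# Toy Pascal's-mugging payoff table: made-up numbers, just to show
# that skew, not spread per se, separates mean from median.
outcomes = [0.0, 1e100]
probs = [1 - 1e-9, 1e-9]

mean = sum(p * x for p, x in zip(probs, outcomes))
# Median: smallest outcome with cumulative probability >= 1/2,
# which here is trivially the zero-payoff outcome.
median = 0.0

print(mean, median)  # mean is ~1e91, median is 0
```

The variance here is technically finite; it's the one-sided gargantuan tail doing all the work.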