I should have explained it better. You take n samples, and calculate the mean of those samples. You do that a bunch of times, and create a new distribution of those means of samples. Then you take the median of that.
This gives a tradeoff between mean and median. As n goes to infinity, you just get the mean. As n goes to 1, you just get the median. Values in between are a compromise. n = 100 will roughly ignore things that have less than 1% chance of happening (as opposed to less than 50% chance of happening, like the standard median.)
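The procedure above can be sketched in a few lines. Everything here is illustrative: the skewed distribution, the trial count, and the function names are all made up for the example, not part of the original comment.

```python
import random
import statistics

def median_of_means(draw, n, trials=1001):
    """Take the median of `trials` sample means, each computed from
    n fresh draws of `draw` (a zero-argument sampling function).

    At n = 1 this is just the median of the draws; as n grows it
    approaches the mean; intermediate n interpolate between the two.
    """
    means = [statistics.fmean(draw() for _ in range(n)) for _ in range(trials)]
    return statistics.median(means)

# Hypothetical skewed distribution: almost always 0, with a 0.1%
# chance of a payoff of 1000 (so the true mean is about 1.0 while
# the median is 0.0).
random.seed(0)
def skewed():
    return 1000.0 if random.random() < 0.001 else 0.0

# With n = 1 the estimator behaves like the median and reports ~0.
print(median_of_means(skewed, n=1))
# With n = 100, most batches of 100 draws still contain no payoff,
# so the median batch mean also ignores this sub-1% event.
print(median_of_means(skewed, n=100))
```

Raising n further (so that a typical batch contains at least one payoff) would pull the estimate toward the true mean of about 1.0, which is the interpolation described above.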
There are a variety of ways to get a tradeoff between the mean and the median (or, more generally, between an efficient but not robust estimator and a robust but not efficient estimator). The real question is how you decide what a good tradeoff is.
Basically if your mean and your median are different, your distribution is asymmetric. If you want a single-point summary of the entire distribution, you need to decide how to deal with that asymmetry. Until you specify some criteria under which you’ll be optimizing your single-point summary you can’t really talk about what’s better and what’s worse.
This is just one of many possible algorithms which trade off between median and mean. Unfortunately there is no objective way to determine which one is best (or the setting of the hyperparameter.)
The criterion we are optimizing is just “how closely does it match the behavior we actually want.”
EDIT: Stuart Armstrong’s idea is much better: http://lesswrong.com/r/discussion/lw/mqk/mean_of_quantiles/
And what is “the behavior we actually want”?