Yes. My point is that this new biased estimate is not your ‘real estimate’ - this is simply not your best guess/posterior distribution given your information. But, as I remarked above, your rational actions given a skewed loss function resemble the actions of a rational agent with a less risk-averse loss function and a different estimate, so to determine your actions you can compute what [an agent with a less skewed loss function and your (deliberately) biased estimate] would do, and then just copy those actions.
But despite all of this, you still want to be unbiased. It’s fine to use the computational shortcut mentioned above to deal with skewed loss functions, but you need your beliefs to stay as accurate as possible to avoid strange future behaviour. A small, simplified example:
Suppose you are in possession of $1001 in total (all your assets included), and it costs $1000 to buy a cure for a fatal disease you happen to have/a ticket to heaven/insurance for cryonics. You most definitely don’t want to lose more than one dollar. Then a guy walks up to you and offers a bet: you pay $2, after which you are given a box which contains between $0 and $10, uniformly distributed (yes, this strange guy is losing money in expectation). Clearly you don’t take the bet, since you don’t actually care much whether you have $1000 or $1001 or $1009, but would be terribly sad if you had only $999. But instead of doing the utility calculation you can also absorb this into your probability distribution over the box: you only care about scenarios where the box contains less than a dollar, so you focus most of your attention on those, and estimate that the box will contain less than a dollar. The problem arises if you then happen to find a dollar on the street: it is now a good idea to buy a box, but agents who have started to believe the box contains at most a dollar will not buy it.
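For concreteness, here is a minimal sketch of that calculation (the step-shaped utility function and the Monte Carlo setup are my own stand-ins, not anything specified above): the true-belief agent declines the bet at $1001 and accepts it at $1002, while an agent that has absorbed the loss function into a “box holds less than a dollar” estimate declines in both cases.

```python
import numpy as np

rng = np.random.default_rng(0)
CURE_COST = 1000   # you must keep at least this much to afford the cure
BET_PRICE = 2      # price of the box
N = 100_000        # Monte Carlo samples of the box contents

def utility(wealth):
    # Being able to afford the cure dominates; extra dollars barely matter.
    wealth = np.asarray(wealth, dtype=float)
    return (wealth >= CURE_COST).astype(float) + 1e-4 * wealth

def decide(start_wealth, box_samples):
    take = utility(start_wealth - BET_PRICE + box_samples).mean()
    decline = float(utility(start_wealth))
    return take, decline

true_box = rng.uniform(0, 10, N)    # actual belief: uniform on [$0, $10]
biased_box = rng.uniform(0, 1, N)   # 'estimate' absorbed from the loss: box < $1

for wealth in (1001, 1002):
    take, decline = decide(wealth, true_box)
    print(f"true beliefs, ${wealth}: take={take:.5f}  decline={decline:.5f}")
    # At $1001 declining wins (10% chance of dropping below $1000);
    # at $1002 taking wins (you can no longer drop below $1000).

take, decline = decide(1002, biased_box)
print(f"biased 'beliefs', $1002: take={take:.5f}  decline={decline:.5f}")
# The agent that kept the biased estimate still declines at $1002,
# because a box believed to hold under $1 looks worse than its $2 price.
```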
To summarise: absorbing sharp effects of your utility function into biased estimates can be a decent temporary computational hack, but it is dangerous to call the partial results you work with in the process ‘estimates’, since they in no way represent your beliefs.
P.S.: The example above isn’t all that great; it was the best I could come up with right now. If it is unclear, or it is unclear how the example is (supposedly) related to the discussion above, I can try to find a better example.
To summarise: absorbing sharp effects of your utility function into biased estimates can be a decent temporary computational hack, but it is dangerous to call the partial results you work with in the process ‘estimates’, since they in no way represent your beliefs.
It seems to me that it’s best to use “your beliefs” to refer to the entire underlying distribution. Yes, you should not bias your beliefs—but the point of estimates is to compress the entire underlying distribution into “the useful part,” and what is the useful part will depend primarily on your application’s loss function, not a generalized unbiased loss function.
My point is that this new biased estimate is not your ‘real estimate’ - this is simply not your best guess/posterior distribution given your information.
Sure it is my “real” estimate—because I take real action on its basis.
Let me make a few observations.
First, any “best” estimate narrower than a complete probability distribution implies some loss function which you are minimizing in order to figure out which estimate is “best”. Let’s take the plain-vanilla case of estimating the central point of a distribution which produced some sample of real numbers. The usual estimate for that is the average of the sample numbers (the sample mean), and it is indeed optimal (“the best”) for a particular (quadratic) loss function. But, for example, change the loss function to absolute deviation (the L1 loss) and now the median becomes “the best estimate”.
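A quick numerical illustration of that point (a sketch; the lognormal sample is just an arbitrary skewed example): brute-force minimizing each loss over a grid of candidate point estimates recovers roughly the sample mean under quadratic loss and roughly the sample median under absolute (L1) loss.

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=1000)  # arbitrary skewed sample

# Brute-force search over candidate point estimates of the "central point".
candidates = np.linspace(sample.min(), sample.max(), 2001)
quadratic_loss = ((sample[None, :] - candidates[:, None]) ** 2).mean(axis=1)
absolute_loss = np.abs(sample[None, :] - candidates[:, None]).mean(axis=1)

print("minimizer of squared loss :", candidates[quadratic_loss.argmin()])
print("sample mean               :", sample.mean())
print("minimizer of absolute loss:", candidates[absolute_loss.argmin()])
print("sample median             :", np.median(sample))
```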
The point is that to prefer any estimate over some other estimate, you must have a loss function already. If you are calling some estimate “best”, this implies a particular loss function.
Second, the usefulness of any estimate is determined by the use you intend for it. “Suitability for a purpose” is an overriding criterion for estimates you produce. Different purposes (“produce an unbiased estimate” and “select a course of action” are different purposes) often require different estimates.
Third, “unbiased” is not an unalloyed blessing. In many situations you face the bias-variance tradeoff and sometimes you do want to have some bias.
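A small sketch of that tradeoff (the shrink-toward-zero estimator and the specific numbers are my own choices, picked so the effect is visible): deliberately biasing the sample mean toward zero adds a little squared bias but removes much more variance, so its mean squared error comes out lower.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean, sigma, n, trials = 0.5, 3.0, 5, 200_000

samples = rng.normal(true_mean, sigma, size=(trials, n))
unbiased = samples.mean(axis=1)   # the unbiased estimator of the mean
shrunk = 0.5 * unbiased           # deliberately biased: shrink halfway toward 0

for name, est in (("unbiased mean", unbiased), ("shrunken mean", shrunk)):
    bias = est.mean() - true_mean
    variance = est.var()
    mse = ((est - true_mean) ** 2).mean()
    print(f"{name:14s} bias={bias:+.3f}  var={variance:.3f}  mse={mse:.3f}")
# The shrunken estimator is biased, but its lower variance gives it a
# smaller mean squared error for these particular parameter values.
```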