Diversifying can pay off, even in relatively simple models, when you have inaccurate information. If you think charity A is best, but it turns out that this is only because they spend 99% of their budget on marketing and advertising, then a portfolio of A, B, and C would very likely have produced better results than giving everything to charity A.
Maybe you should obtain better information. In practice, however, charity assessment is poorly funded, there are controversies over which charities best support which goals, and getting better information on such topics is itself just another way of spending money.
The bigger the chance that your information is inaccurate, the more it pays to hedge. Inaccurate estimates seem rather likely in the case of “risky” charities, where the benefit involves multiplying a hypothetical small probability by a hypothetical large payoff, and efficacy is hard to measure.
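A minimal Monte Carlo sketch of the marketing scenario above (the per-dollar impacts and the 30% chance of being misled are hypothetical numbers, chosen only for illustration):

```python
import random

random.seed(1)

def trial(p_misleading=0.3):
    # Hypothetical per-dollar impacts: A looks best on paper.
    impact = {"A": 1.0, "B": 0.8, "C": 0.7}
    # With some probability, A's apparent edge was really marketing
    # spend, and its true impact is near zero.
    if random.random() < p_misleading:
        impact["A"] = 0.05
    all_in_a = impact["A"]                # whole budget to A
    portfolio = sum(impact.values()) / 3  # equal three-way split
    return all_in_a, portfolio

n = 100_000
total_a = total_portfolio = 0.0
for _ in range(n):
    a, p = trial()
    total_a += a
    total_portfolio += p
print(f"all-in-A: {total_a/n:.3f}, portfolio: {total_portfolio/n:.3f}")
```

Under these made-up numbers the equal split has the higher expected impact, because the portfolio limits the damage in the worlds where A turns out to be mostly advertising.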
If that is what you mean, my comment would be that it amounts to accounting for uncertainty by appropriately penalising the utilities associated with the charities you are unsure about. However, charities, especially bad charities, may well be trying to manipulate people’s perceived confidence that they are sound, so those figures might be bad.
If perceived utility is negatively correlated (at the top end) with actual utility, as in your example, then your strategy is superior to putting it all in the perceived-best. However, if you expect this to be the case, then you should update your beliefs on perceived utility. If the figures might be bad, account for that in the figures!
If there is even a small positive correlation between perceived and actual utility, putting it all in one (the perceived-best) is optimal for a risk-neutral donor.
I’m hitting the ‘bozo button’ for Tim in this conversation. The math has been explained to him several times over.