I’m entirely unconvinced by the argument against diversifying donations. If you assume that your algorithm for choosing a charity might be faulty in an exploitable way, the #1 charity may be sufficiently able and motivated to exploit you, since it would have all your money as a reward (and the money of anyone who’s reasoning like you), while each of the top 5 would have five times less incentive.
Let’s consider selfish actions, to engage our primarily selfish intelligence. Should you invest in one corporation, the one you deem most effective? The investment-to-payoff scenario matches that of charitable giving rather well, except that you are the beneficiary (and you do care not to invest in something that flops and goes bankrupt).
Of course, in investments as in charitable giving, people diversify for entirely wrong reasons, and perhaps over-diversify. But the very same people, when told not to diversify, may well respond by donating less overall, for a lower expected benefit.
You have a strong reason not to do this anyway: risk aversion. The argument is like saying, “Should you serve butter or margarine to your guests? To get a better intuition, consider the selfish version, where you yourself are going to eat either pristine butter or a container of margarine that has been poisoned with arsenic.”
I agree that exploitability is an issue, and that you should treat manipulable signals as weaker evidence because of Goodhart’s Law. But this effect doesn’t automatically dominate: selecting for good expected value, using your best efforts, incentivizes efforts to produce signals of value through real channels as well as fakeable ones.
Note that GiveWell and friends do not follow your heuristic: the great majority of the funds they direct flow to the top charity. They take the possibility of faked data (which would corrupt the cost-benefit analysis) into account in their evaluation process, valuing independent verification, defenses against publication bias, audits, and so forth. But in light of those efforts, they think the benefits of incentivizing (and assisting) more effective and transparent charities outweigh the risk of incentivizing fakers who can defeat their strong countermeasures.
Your first paragraph assumes that giving $5 to the top charity is of no more value than giving $1 to that charity.
If you don’t believe me, come up with a formal model that doesn’t assume that and see what it says. Just do the math.
Okay, here’s the model: the expected utility of $1 to each of the chosen top 5 charities is nearly equal (due to inaccuracy in evaluating the utilities), and the charities are nearly linear (not super-linear). The expected utility of donating $x to charity i is x·a[i], and the a[i] values for the top 5 are very close to equal. [They are very close to equal precisely because of your inability to evaluate the utilities of donations to charities any more finely.]
(for reasonable values of x; we already determined that a multi-billionaire needs to diversify)
Thus the combined utility of paying $100 to each of the top 5 charities is nearly equal to the utility of paying $500 to the top one. There is a slight loss, because the expected utility of the #1 charity is very slightly above that of #5.
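As a sanity check, here is a minimal numeric sketch of that claim; the a[i] values are made-up illustrative numbers, not estimates of any real charities:

```python
# Toy check of the "nearly equal" claim: utilities are linear,
# U(x, i) = x * a[i], with made-up a[i] values that are close together.
a = [1.00, 0.99, 0.99, 0.98, 0.97]   # illustrative per-dollar utilities

u_concentrated = 500 * a[0]              # $500, all to the #1 charity
u_split = sum(100 * ai for ai in a)      # $100 to each of the top 5

print(u_concentrated)   # 500.0
print(u_split)          # 493.0, about a 1.4% loss, i.e. "nearly equal"
```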
At the same time, the strategic reasoning is as follows: the function I (and people like me) used for selecting the top (or even the top 5) charities may be exploitable. When the donation is split between the top 5, each has 1⁄5 the incentive to exploit it; so the decision to split between the top 5, while unable to affect anything about the contribution right now, affects the future payoff of exploitative strategies (and, if known beforehand, affects the past payoff estimates as well).
Of course, the above reasoning does not work at all if you are grossly overconfident in your evaluations of charities and assume some giant differences between the expected utilities of the top 5, differences which you furthermore had detected correctly.
I think “exploit” is a bad way of looking at it, for the reasons pengvado raises below. However, there’s also the possibility that you’re running an incorrect algorithm, or have otherwise made some error in reasoning when selecting the top #1 charity.
Also, if numerous people run the same algorithm, you’re more likely to run into over-saturation issues with a “single charity” model (a thousand people all decide to donate $100 this month, and suddenly Charity A has $100K but can only efficiently use, say, $20K). I’d mostly see this coming up when a major influence (such as a news story) pushes a large number of people to donate suddenly, without any easy way to “cap” that influence (i.e., the news is unlikely to say “okay, Haiti disaster funding is good, stop now”).
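A minimal sketch of that failure mode, using the hypothetical numbers above and the (pessimistic) assumption that money beyond a charity’s absorption capacity is simply wasted:

```python
# Over-saturation sketch: each charity can only use `capacity` dollars
# efficiently this month; assume (pessimistically) the rest is wasted.
def used_efficiently(donated, capacity):
    return min(donated, capacity)

donors, gift = 1000, 100      # a thousand people donate $100 each
total = donors * gift         # $100K arrives at once
capacity = 20_000             # Charity A can only absorb $20K efficiently

concentrated = used_efficiently(total, capacity)
split = sum(used_efficiently(total / 5, capacity) for _ in range(5))

print(concentrated, split)    # 20000 100000.0
```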
It’s important to realize that if we have, say, a 50% chance of being wrong about each charity, and we’re donating $100, we’re still producing an expected $50 worth of charity regardless of how we split it. However, if we put all our eggs in one basket, we get either $100 or $0 worth of charity. With five different charities, we get a bell curve over the possible outcomes $100, $80, $60, $40, $20, and $0.
If charity is linear, it doesn’t matter. However, I suspect there are incentives favoring the bell curve—both because it minimizes the worst-case $0-benefit scenario, and simply out of an aesthetic/personal preference for less risky investments. (If nothing else, risk-averse individuals will probably donate more to a bell curve than to an “all or nothing” gamble.)
Obviously I’m simplifying with the idea of an “all or nothing” gamble for the most part (though a fraudulent charity really could be one!), but I think it illustrates why splitting donations really is beneficial even if “shut up and multiply” says the two options are approximately equal.
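A minimal enumeration of the distribution described above, assuming the 50% error chance is independent across the five charities:

```python
from itertools import product

# Each of 5 charities independently turns out to be "right" with
# probability 0.5, and the $100 is split evenly among them.
donation, n, p = 100, 5, 0.5

dist = {}
for pattern in product([0, 1], repeat=n):      # all 2^5 = 32 scenarios
    value = sum(pattern) * donation / n        # realized charitable value
    dist[value] = dist.get(value, 0) + p ** n  # each scenario has prob 1/32

for value in sorted(dist):
    print(f"${value:5.1f}: probability {dist[value]:.5f}")
# $0 and $100 each come out at 0.03125, $20 and $80 at 0.15625, and
# $40 and $60 at 0.31250. The mean is $50 either way; splitting just
# moves probability mass from the extremes toward the middle.
```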
If ‘numerous’ people manage to actually select and overload the same charity, that charity probably has someone running a similar algorithm and will be smart enough to pass the money on to choice #2. (Funnily enough, charities can and do donate to other charities.)
“that charity probably has someone running a similar algorithm”
That does not follow, unless you’re assuming a community of perfect rationalists.
I’m assuming here a community of average people, where Reporter Sara happens to run a personal piece about her favorite charity, Honest Bob’s Second-Hand Charity, which pulls in $50K/year. The story goes viral, and suddenly Honest Bob has a million dollars in donations, no clue how best to put it to use, and a genuine conviction that his charity is truly the best one out there.
Even if we assume a community of rational donors, that doesn’t mean the charity is itself rational. If the charity won’t rationally handle over-saturation (over-confidence in its own abilities, lack of knowledge about other charities, the overhead of redistributing, social repercussions, etc.), then the community has to handle it. The ideal would probably be a meta-organization: Honest Bob can only really handle $50K more, so everyone donates $100, $50K goes to Honest Bob, and the rest is split proportionally and refunded or invested into second-pick charities.
However, the meta-organization is just running the same splitting algorithm on a larger scale. You could just as easily have everyone donate $5 instead of $100 (the same 10,000 donors who would have given $1M give $50K), and Honest Bob now has his $50K without the overhead expenses of such a meta-organization.
So, unless you’re dealing with a Perfectly Rational charity that can both recognize and respond to its own over-saturation point, splitting is still a rational tactic.
If there are many charities competing to exploit the same ranking heuristic, then your proposal replaces an incentive of (probability p of stealing all of the donations) with (probability 5p of stealing 1⁄5 of the donations). That doesn’t look like an improvement to me.
http://lesswrong.com/lw/aid/heuristics_and_biases_in_charity/63gy — the second half specifically addresses why “probability 5p of 1⁄5” might be preferred to “probability p of everything”. In short, “5p of 1⁄5” produces a bell curve instead of an “all or nothing” gamble.
The effort put towards exploiting a ranking heuristic is not restricted to the two-element set {0, whatever value happens to be most convenient for your rationalization}. And the effort-to-payoff curve is flattened out at the high-effort end, where greater effort doesn’t get an exploiter anything better than being in the top 5 (see the toy sketch below).
It is clear you are rationalizing: your 5p exceeds 1 once p > 0.2 (which it can be, if one expends sufficiently more effort towards raising p than anyone else), so a probability of 5p can’t possibly make sense in that regime.
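A toy model of this effort-to-payoff argument; every curve and constant below is an assumption chosen for illustration, not something established in the thread:

```python
import math

# An exploiter picks an effort level e. Cracking the #1 slot takes much
# more effort than merely reaching the top 5, so the top-5 probability
# saturates sooner. Functional forms and constants are illustrative only.
D = 500.0                            # total donation pool at stake

def p_top1(e):
    return 1 - math.exp(-0.5 * e)    # slow climb to the #1 slot

def p_top5(e):
    return 1 - math.exp(-2.0 * e)    # saturates quickly; top 5 is easier

for e in (0.5, 1.0, 2.0, 4.0):
    concentrated = p_top1(e) * D     # everyone gives only to #1
    split = p_top5(e) * D / 5        # donations split among the top 5
    print(f"effort {e}: concentrated {concentrated:6.1f}, split {split:6.1f}")
# Under splitting the payoff is capped at D/5 = 100, so effort past the
# point where p_top5 saturates buys the exploiter nothing; under
# concentration, extra effort keeps raising the expected payoff.
```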