I really wish this had been two posts. The first, about the dishonesty of “matching” gifts where the match would have happened regardless, is spot-on, but perhaps too long for some audiences (the smart ones, who already knew this or only needed a push to think about the counterfactual).
The second, about “good arbitrage”, has an interesting idea, which I had trouble teasing out. Is this a recommendation to large donors that they can actually make their match effective by setting up competition across charities (or segments of a charity)? Something like: donate $50K, split between N charities, and decide on the split based on how much each charity collects in “matches”?
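To make that reading concrete, here is a minimal sketch of the competition mechanism as I understand it; the function name, the even-split fallback, and the example figures are my own assumptions, not anything from the post:

```python
# Hypothetical sketch: a large donor pledges a fixed pool and splits it
# across charities in proportion to the small-donor "matches" each attracts.

def split_pledge(pool, matches_raised):
    """Allocate `pool` proportionally to each charity's matched donations."""
    total = sum(matches_raised.values())
    if total == 0:
        # No information revealed by donors; fall back to an even split.
        n = len(matches_raised)
        return {name: pool / n for name in matches_raised}
    return {name: pool * raised / total
            for name, raised in matches_raised.items()}

# Example: a $50K pledge, with small donors "voting" via their matches.
allocation = split_pledge(50_000, {"A": 30_000, "B": 10_000, "C": 10_000})
```

Under this scheme the small donors' revealed preferences, not the large donor's guess, determine the final split.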
Both sections had a confusing mix of target audiences. Are you writing to development departments, large donors (who provide the matches), or small donors (who decide whether to use matching as part of their decision process)? It would have been clearer and simpler if you’d separated them into different sections.
I think the problem here is that you’re expecting a post oriented towards specific actionable short-run recommendations. I only really care about doing that to the extent that it makes the overall principle clearer. The good situation isn’t one in which people specifically avoid (either as charities or as potential donors) matching donation drives because of the narrow arguments in the post, but one in which people learn the underlying principle, and notice that strategies that reveal information are cooperative, and strategies that conceal or distort information cause harm.
The “good arbitrage” argument cashes out to a recommendation: unless you have strong reason to believe you have epistemic or moral luck in a relevant way, you should be happy to reveal information about your plans, even when this creates the possibility of potential collaborators deciding against investing resources in you, since in general you prefer other good actors to make better, more informed decisions.
strategies that reveal information are cooperative, and strategies that conceal or distort information cause harm.
Agreed, and I wish this had been clearer, earlier in the post.
The “good arbitrage” argument …
Doesn’t that boil down to “arbitrage only happens when information is private”? In a world where everyone shares knowledge (and values, perhaps), the concepts of arbitrage, matching, advertising, structured donations, etc. are all irrelevant. But that’s not where we live.
I don’t think just shared knowledge is sufficient (e.g. advertising is not about giving you information), and if you invoke Aumann and add the shared values requirement, don’t you end up in a situation where everyone is basically the same? You get a hive-mind kind of a society...
Advertising (in which I include matching schemes and other attempts to influence donors) is either:
1) dark arts, at best orthogonal to a charity’s mission, at worst contrary to it (if the mission includes raising the sanity waterline).
2) providing information, possibly in order to gather information (i.e. letting donors know how to communicate, not just give).
Benquo’s “good arbitrage” seemed to be saying that #2 is desirable, and I don’t fully understand the argument. If he’s making the argument from crowdsourcing preferences, then that does lead toward a view that there is a “right” answer which we’re trying to discover from the hive-mind.
As far as I can understand Benquo’s arbitrage argument, it relies on the donor being able to forecast the effectiveness of the program better than the charity itself can. And if you assume the charity and the donor share the same goal, it makes sense to defer the decision to the better prognosticator.