Diversification reduces the payoff for appearing better. Therefore it reduces the payoff of investing in fake signals of being better. But it also reduces the payoff of investments in actually being better!
Good charities are motivated by their objective. It is the bad charities for which actually performing better is merely one means of looking good, in service of some entirely different terminal goal. You are correct about the latter.
If you can’t distinguish good from bad, you should focus on improving your ability to distinguish between them, not blindly diversify.
I do concede that an unusually careful and secure (in the software-security sense) evaluation may be sufficiently resistant to cheating.
However, if you were parsing potentially Turing-complete statements from a prospective charity, verifying those statements for approximate internal consistency, and then, as a result of this clearly insecure process, obtained an enormously high number (say, 8 lives per dollar), that is an entirely different story. If your evaluation process has a security hole, the largest number that falls through it will be a scam.
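A tiny simulation makes the last sentence concrete (the numbers and setup here are my own illustrative assumptions, not from the comment): honest charities report estimates bounded by reality, while a single scam exploits the insecure evaluation to report an arbitrary figure, so taking the argmax of the reported numbers reliably selects the scam.

```python
# Honest charities report noisy but reality-bounded effectiveness estimates;
# one scam slips an arbitrary number through the evaluation's security hole.
import random

random.seed(0)

# Hypothetical honest estimates, in lives saved per dollar.
honest = [round(random.uniform(0.001, 0.01), 4) for _ in range(50)]
reports = [("honest", v) for v in honest]
reports.append(("scam", 8.0))  # the "8 lives per dollar" claim

# Donating purely to the top evaluated number picks the scam entry,
# because the scam is not constrained by reality.
top = max(reports, key=lambda r: r[1])
print(top)  # → ('scam', 8.0)
```

The point is not that any single honest estimate is wrong, but that maximizing over an adversarially gameable channel selects for whoever games it hardest.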
edit:
In the limit, if donors ignored quality indicators and spread donations evenly among all charities, all this would do is incentivize the formation of lots of tiny charities that do nothing at all and just collect most of the diversification donations. If you can’t distinguish good from bad, you should focus on improving your ability to distinguish between them, not blindly diversify.
Wrong limit. The optimal amount of diversification depends on how secure the evaluation process is, that is, on how expensive it is for someone to generate a ‘donation basilisk’: an output which, upon reading, compels the reader to donate. Yes, ideally you would entirely eliminate the possibility of such ‘donation basilisk’ data and then donate to the top charity. In practice, the degree of basilisk-proofness is a given that is very difficult to change, and you are making your donation decision now.
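The dependence of optimal diversification on evaluation security can be sketched with a toy model (entirely my own assumptions: the values, the single-scam failure mode, and the probability parameter are hypothetical). Suppose that with probability `p_scam` the top-ranked slot has been captured by a worthless scam, pushing the honest charities down one rank; spreading the budget over the top k slots then trades peak value for robustness, and the best k grows as the evaluation gets easier to game.

```python
# Toy model: how many top-ranked charities to split a budget across,
# as a function of how likely the #1 evaluation slot is a gamed scam.

def expected_value(k, values_sorted, p_scam):
    """Expected effectiveness per dollar when the budget is split evenly
    over the top-k evaluated slots. values_sorted lists the honest
    charities' true effectiveness in descending order; with probability
    p_scam the #1 slot is a scam worth 0 and honest charities shift down."""
    share = 1.0 / k
    ev_honest = share * sum(values_sorted[:k])            # ranking was honest
    ev_scam = share * sum(values_sorted[:k - 1]) if k > 1 else 0.0
    return p_scam * ev_scam + (1 - p_scam) * ev_honest

values = [1.0, 0.9, 0.8, 0.7, 0.6]  # hypothetical true effectiveness, ranked

for p in (0.0, 0.5, 0.9):
    best_k = max(range(1, 6), key=lambda k: expected_value(k, values, p))
    print(f"p_scam={p}: donate to top {best_k}")
```

With a perfectly secure evaluation (`p_scam=0`) the model recommends donating everything to the top charity, exactly as the ideal case above; as `p_scam` rises, the recommended number of recipients rises with it.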