"for instance it may be easier to exploit to the extent sufficient to get into the top 5"
This seems sort of important.
Sure: if I have two algorithms, A1 and A2, where A1 spits out a single charity and A2 spits out an unsorted list of 5 charities, and A1 is easy for people to exploit while A2 is much more difficult to exploit, then it’s entirely plausible that I’ll do better using A2, even if that means spreading my resources among five charities.
OTOH, if A2 is just as easy for people to exploit as A1, it’s not clear that this gets me any benefit at all. And if A2 is easier to exploit, it leaves me actively worse off.
Granted, if, as in your turret example, A2 is simply (A1 plus some random noise), A2 cannot be easier to game than A1. And, sure, if (as in your turret example) all I care about is that I’ve hit the best charity with some of my money, random diversification of the sort you recommend works well.
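For concreteness, here is a minimal sketch of how I’m picturing the two rules (Python; the charity names, scores, and noise scale are all invented for illustration): A1 takes the single highest-scoring charity outright, while the "A1 plus some random noise" version of A2 perturbs each score and keeps an unsorted top five.

```python
import random

def a1_pick(scores):
    """A1: return the single highest-scoring charity."""
    return max(scores, key=scores.get)

def a2_pick(scores, k=5, noise_scale=1.0, rng=random):
    """A2 (the 'A1 plus some random noise' variant): perturb each score
    with random noise, then return the top k charities as an unsorted set."""
    noisy = {name: s + rng.gauss(0, noise_scale) for name, s in scores.items()}
    return set(sorted(noisy, key=noisy.get, reverse=True)[:k])

# Invented scores purely for illustration:
scores = {f"charity_{i}": s
          for i, s in enumerate([9.1, 8.7, 8.6, 8.2, 7.9, 7.5, 6.0, 4.2])}
print(a1_pick(scores))       # the single best charity under A1
print(a2_pick(scores, k=5))  # five candidates under the noisy A2 rule
```

Since A2 here only adds noise on top of A1’s scores, anyone trying to game the ranking still has to inflate their A1 score; the noise just makes it less certain that doing so lands them the whole pot.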
I suspect that some people donating to charities have different goals.