To clarify a few points that may have been lost behind abstractions:
Suppose there is a sub-population of donors who do not understand physics very well, and who do not understand how one could claim that a device won’t work without a thorough analysis of its blueprint. Those people may be inclined to donate to a research charity working on magnetic free-energy devices, if such a charity exists: a high-payoff, low-probability scenario.
Suppose you have N such people willing to donate, on average, $M each to a cause or causes.
Two strategies are considered: donating everything to the 1 subjectively best charity, or splitting the donation among the 5 subjectively top charities.
Under the strategy of donating to the 1 ‘best’ charity, the payoff for a magnetic perpetual-motion-device charity, if one is created, is 5 times larger than under the strategy of dividing among the top 5. There is five times the reward for exploiting this particular insecurity in the choice process; for sufficiently large M and N, single-charity donating crosses the threshold at which such a charity becomes economically viable, and some semi-cranks, semi-frauds will jump on it.
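The viability argument can be sketched numerically. All the numbers below (N, M, the susceptible fraction, the running cost) are illustrative assumptions, not figures from the text:

```python
# Hypothetical numbers for the crank-charity viability argument.
N = 100_000              # donors (assumption)
M = 50.0                 # average donation per donor, in $ (assumption)
p = 0.02                 # fraction who rank the free-energy charity first (assumption)
running_cost = 40_000.0  # assumed annual cost of running the crank charity

# Strategy A: each donor gives everything to their single 'best' charity,
# so the crank charity captures each susceptible donor's full donation.
crank_revenue_single = p * N * M

# Strategy B: each donor splits equally among their top 5 charities,
# so the crank charity gets only 1/5 of each susceptible donor's money.
crank_revenue_split = p * N * M / 5

print(crank_revenue_single)  # 5x the split-strategy revenue
print(crank_revenue_split)
```

Under these particular assumptions, the single-charity norm yields $100,000 while the splitting norm yields $20,000; with a $40,000 running cost, only the former clears the viability threshold, which is the point of the argument.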
But what about the people donating to normal charities, like clean water and mosquito nets? The differences between top normal charities boil down to fairly inaccurate value judgements, about which most people do not feel particularly certain.
Ultimately, the issue is that the correlation between the charity you select and that charity’s actual efficacy is affected by how you choose. It is similar to the gun turret example.
There are two types of uncertainty here: probabilistic uncertainty, over which expected utility can be straightforwardly evaluated, and systematic bias, which is unknown to the agent but may be known to other agents (e.g. inferred from observations).
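The distinction can be made concrete with a toy model. Here a donor’s estimate of a charity’s efficacy carries two error terms: noise with a known distribution (which averaging handles), and a fixed bias the donor is unaware of (which averaging cannot remove). All quantities are invented for illustration:

```python
import random

random.seed(0)

true_efficacy = 1.0
bias = 0.5  # systematic bias: unknown to the donor, learnable by an exploiter

def donor_estimate():
    # Probabilistic uncertainty: noise with a known variance, so the donor
    # can correctly integrate over it when computing expected utility.
    noise = random.gauss(0, 0.1)
    return true_efficacy + bias + noise

# Averaging many samples drives the noise term to zero, but the bias shifts
# every sample identically, so the donor converges to the wrong answer.
samples = [donor_estimate() for _ in range(10_000)]
mean_estimate = sum(samples) / len(samples)
print(round(mean_estimate, 2))  # close to 1.5, not the true 1.0
```

The donor’s expected-utility machinery works fine on the noise; it is the systematic term, invisible from inside the donor’s own model, that another agent can observe and exploit.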