Should effective altruists be praised for their motives, or their results?
It is sometimes claimed, perhaps by those who have recently read The Elephant in the Brain, that effective altruists have not risen above the failures of traditional charity, and are every bit as mired in selfish motives as supporters of non-EA causes. From a consequentialist view, however, this critique is not by itself valid.
To a consequentialist, it doesn’t actually matter what a person’s motives are, as long as the actual effect of their action is to do as much good as possible. This is the primary difference between the standard way of viewing morality and the way consequentialists view it.
Now, if the critique were instead that by acting on unconsciously selfish motives we systematically bias ourselves away from recognizing the most important actions, then it would be sound. Of course, the conversation then shifts immediately toward what we can do to remedy the situation. In particular, it suggests that we should set up a system that corrects for our systematic biases.
Just as a prediction market corrects for systematic biases by rewarding those who predict well and punishing those who don’t, there are similar ways to incentivize honesty in charity. One such method is to praise people in proportion to how much good they really achieve.
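The prediction-market analogy can be made concrete with a proper scoring rule. As a minimal sketch (not a real market implementation, and with the stake amount chosen arbitrarily), the quadratic (Brier-based) reward below pays a forecaster more the closer their stated probability is to the actual outcome, so honest, well-calibrated reporting maximizes expected payoff:

```python
def brier_reward(prediction: float, outcome: int, stake: float = 1.0) -> float:
    """Pay out `stake` minus the quadratic (Brier) penalty.

    `prediction` is the forecaster's stated probability of the event;
    `outcome` is 1 if the event happened, 0 otherwise. Because the
    quadratic score is "proper," a forecaster maximizes expected reward
    by reporting their true belief rather than hedging or exaggerating.
    """
    return stake * (1.0 - (prediction - outcome) ** 2)

# A confident, correct forecaster earns more than one who hedged at 50%.
```

The point of using a proper scoring rule rather than, say, rewarding only perfect predictions is precisely the one made above: the incentive structure itself corrects for the bias toward self-flattering claims.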
It has previously been argued in the philosophical literature that consequentialists should praise people for their motives rather than their results, because punishing someone for accidentally doing something bad when they genuinely meant to help would do nothing but discourage people from trying to do good. While this argument contains a kernel of truth, it is nonetheless flawed.
Similar to how rewarding a student for their actual grades on a final exam will be more effective in getting them to learn the material than rewarding them merely for how hard they tried, rewarding effective altruists for the real results of their actions will incentivize honesty, humility, and effectiveness.
The obvious problem with the framework I have just proposed is that there is currently no such way to praise effective altruists in exact proportion to how effective they are. However, there are ways to approach this ideal.
In the future, prediction markets could be set up to forecast the counterfactual results of particular interventions. Effective altruists who are able to discover the most effective of these interventions, and who act to create them, could be rewarded accordingly.
It is already the case that we can roughly estimate the near-term effects of anti-poverty charities, and thus get a sense of how many lives people are saving by donating a certain amount of money. Giving people praise in proportion to how many lives they really save could be a valuable endeavor.
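To illustrate praise-in-proportion-to-impact, here is a toy sketch. The cost-per-life figure is a hypothetical placeholder, not any real charity's estimate, and the linear model ignores diminishing returns:

```python
def estimated_lives_saved(donation_usd: float,
                          cost_per_life_usd: float = 5000.0) -> float:
    """Rough linear estimate of lives saved by a donation.

    `cost_per_life_usd` is a hypothetical placeholder; real
    cost-effectiveness estimates vary by charity and over time.
    """
    return donation_usd / cost_per_life_usd

def praise_score(donation_usd: float,
                 cost_per_life_usd: float = 5000.0) -> float:
    """Praise proportional to estimated lives saved (one unit per life)."""
    return estimated_lives_saved(donation_usd, cost_per_life_usd)
```

The design choice worth noticing is that praise tracks the output (lives saved) rather than the input (dollars given or effort expended), which is exactly the substitution this essay argues for.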
A commenter quoted the sentence “Similar to how rewarding a student for their actual grades on a final exam will be more effective in getting them to learn the material than rewarding them merely for how hard they tried” and asked: Evidence for this?

My reply: Hmm, I sort of assumed this was obvious. I suppose it depends greatly on whether you can inspect whether they are actually trying, or whether they are merely “trying.” It is plausible that with sufficient supervision you could do better by incentivizing effort; however, that method is expensive.