Singer's argument is that
1) We have a moral obligation to try to do the most net good we can.
2) Your obligation to do so holds regardless of distance or the neglect of others.
3) This creates an unconventionally rigorous and demanding moral standard.
Benquo’s is that
1) Even the best charity impact analysis is too opaque to be believable.
2) The rich have already pledged enough to solve the big problems.
3) Therefore, spend on yourself, spend locally, and on “specific concrete things that might have specific concrete benefits;” also, try to improve our “underlying systems problems.” “There’s no substitute for developing and acting on your own models of the world.”
We must inevitably develop our own models of the world, and it's important to read impact assessments critically, as we would anything else. But I don't think Benquo makes much of an argument for why or how we should instead spend on ourselves, our local community, or on "specific concrete benefits" as an alternative method of doing good. My understanding of how the world works and what constitutes the good has a strong social basis, and we ought to be just as skeptical of our own observations as we are of others'. The reason EA and impact assessment excite me is that they create a basis for improving our altruistic strategy over the long term.
I'm open to the idea that local altruism is ultimately the better strategy, but I would need to see an equally strong argument for that side. I just haven't seen one yet. I've spent too much time engaged in personal, face-to-face relationships and activism with poor people in America and around the world to dismiss the call to focus almost exclusively on populations in extreme poverty. I'm more skeptical of X-risk as an altruistic project, for the same reasons that Benquo critiques GiveWell and because it's hard for me to see how we sway the military to eschew new weapons.
If he’s rejecting not just earning-to-give, but the whole philosophy of utilitarianism, he hasn’t really refuted any of the core points of Singer’s argument. Opaque analysis should lead us to do our own research, not reject the project of increasing our impact. Neglect by the rich doesn’t mean we too can neglect these funding gaps. If these problems indicate the need for revolutionary change rather than philanthropy, that’s fine, though I have to say that leading off the call to action with “spend money on taking care of yourself and your friends” makes it sound a lot more like motivated reasoning.
I’m rejecting the claim that there exists an infinite pit of suffering that can be remedied cheaply per unit of suffering. If I don’t make constructive suggestions, people claim that I’m just saying everything is terrible or something. If I do, they seem to think that the whole post is an argument for the constructive suggestions. I’m not sure what to do here.
This is meant as a constructive suggestion. I find some of your posts here to be ambiguous.
For example, in your reply here, I can’t tell whether you’re complaining that I, too, am playing into this catch-22 that you describe, or whether instead you feel that my post is more sympathetic to you and thus a place where you can more safely vent your frustration.
As you can see from my first comment in this chain, I was also unsure of how to interpret your original post. Was it an argument for giving up on a moral imperative of altruistic utility-maximization entirely, a re-evaluation of how that imperative is best achieved, or a claim that maximization is good in theory but such opportunities don't exist in practice?
Although everyone should give others a sympathetic and careful reading, if I were in your shoes I might consider whether my writing is clear enough.