Disclaimer: I like and support the EA movement.
I agree with Vaniver that it would be good to give more time to arguments that the EA movement is going to do large net harm. You touch on this a bit with the discussion of Communism and moral disagreement within the movement, but one could go further. Some speculative ways in which the EA movement could have bad consequences:
- The EA movement, driven by short-term QALYs, pulls effort away from influencing science and policy in rich countries (which have long-term impacts) and toward brief alleviation of problems for poor humans and animals.
- AMF-style interventions increase population growth and lower average world income and education, which leads to fumbling of long-run trajectories or existential risk.
- The EA movement screws up population ethics and the valuation of different minds in such a way that it doesn’t just fail to find good interventions, but pursues actively terrible ones (e.g. making things much worse by trading off human and ant conditions wrongly).
- Even if the movement mostly does not turn toward promoting bad things, it turns out to be easier to screw things up than to help, and foolish proponents of conflicting sub-ideologies collectively make things worse for everyone, PD-style; you see this in animal activists enthusiastic about increasing poverty to reduce meat consumption, or poverty activists happy to create huge deadweight GDP losses as long as resources are transferred to the poor.
- Something like explicit hedonistic utilitarianism becomes an official ideology somewhere, in the style of Communist states (even though the members don’t really embrace it in full on every matter, they nominally endorse it as universal and call their contrary sentiments weakness of will): the doctrine implies that all sentient beings should be killed and replaced by some kind of simulated orgasm-neurons and efficient caretaker robots (or that much potential value should otherwise be sacrificed in the name of a cramped conception of value), and society is pushed in this direction by a tragedy of the commons; also, see Robin Hanson.
- Misallocating a huge mass of idealists’ human capital to donations for easily measurable things, and away from more effective things elsewhere, sabotages more effective do-gooding for a net worsening of the world.
- The EA movement gets into politics, can’t clearly evaluate various policies with huge upside and downside potential because of ideological blinders, and winds up with a massive net downside.
- The EA movement finds extremely important issues, and then turns the public off from them with its fanaticism, warts, or fumbling, so that it would have been better to have left those issues to other institutions.
Hmm. I didn’t interpret a hypothetical apostasy as the fiercest critique, but rather the best critique—i.e. weight the arguments not by “badness if true” but by something like badness times plausibility.
But you may be right that I unconsciously biased myself towards arguments that were easier to solve by tweaking the EA movement’s direction. For instance, I should probably have included a section about measurability bias, which does seem plausibly quite bad.
I don’t have time to explain it now, so I will state the following in the hope that merely stating it will be useful as a data point. I think Carl’s critique was more compelling, more relevant if true (as you agree), and also not much less likely to be true than yours. Given how destructive his points would be if true, and that they are almost as likely to be true as yours, I think Carl’s is the best critique.
In fact, the 2nd main reason I don’t direct most of my efforts to what most of the EA movement is doing is that I do think some weaker versions of Carl’s points are true. (The 1st is simply that I’m much better at finding out whether his points are true, and at other more abstract things, than at doing EA stuff.)
“For instance, I should probably have included a section about measurability bias, which does seem plausibly quite bad.”
This does show up in the poor cause choices section, and I’m not sure it deserves a section of its own (though I do suspect it’s the most serious reason for poor cause selection, beyond the underlying population ethics being bad).
“Hmm. I didn’t interpret a hypothetical apostasy as the fiercest critique, but rather the best critique—i.e. weight the arguments not by “badness if true” but by something like badness times plausibility.”
See http://www.amirrorclear.net/academic/papers/risk.pdf. Plausibility depends on your current model/arguments/evidence. If badness times the probability of those being wrong dwarfs your original badness-times-plausibility estimate, you must account for it.
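To make that concrete, here’s a toy calculation (my numbers, purely illustrative, not from the paper): once you include the chance that the model behind your plausibility estimate is itself wrong, the adjusted expected badness can be dominated by model error rather than by the in-model plausibility.

```python
# Toy illustration (numbers are invented) of adjusting an expected-badness
# estimate for the chance that the model producing the plausibility
# estimate is itself wrong.
badness = 1e6                  # harm if the bad outcome occurs
p_bad_given_model = 0.001      # plausibility under your current model
p_model_wrong = 0.05           # chance your model/arguments themselves fail
p_bad_if_model_wrong = 0.2     # plausibility conditional on model failure

naive = badness * p_bad_given_model
adjusted = badness * ((1 - p_model_wrong) * p_bad_given_model
                      + p_model_wrong * p_bad_if_model_wrong)
print(naive, adjusted)  # 1000.0 vs 10950.0: the model-error term dominates
```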
Odds are that if someone benefits from doing a hypothetical apostasy, they can’t be trusted to assess plausibility accurately. You’d want at least to take the worst-case scenario for plausibility, or simply to neglect plausibility and later make sure that the things you feel are “very implausible” are in fact very implausible.
I’m slightly suspicious of the whole hypothetical-apostasy exercise: it feels like proofreading, and I find it almost impossible to thoroughly proofread myself. Wouldn’t it be easier and better to find well-qualified critics, if they exist, and save the hypothetical apostasy for cases where decent critics can’t be found? Although I suppose seeking out critics is already implied by a hypothetical apostasy, as it would be a lazy apostate who didn’t research support for his position.
The Elitist Philanthropy of So-Called Effective Altruism
Enterprise Is the Most “Effective Altruism”
I suppose a problem with other critics is that their values likely differ from yours.
Yes, I don’t consider either the CEO of a GiveWell competitor or a couple of theologians to be well-qualified to critique effective altruism. Part of my motivation in writing this was specifically the abysmal quality of such critiques.
I think that e.g. Michael Vassar is a much more qualified outside critic (outside in the sense of not associating with the EA movement), and indeed several of my arguments here were inspired by him (as filtered through my ability to interpret his sometimes oracular remarks, so he can feel free to disown the results, though he hasn’t yet). Some of what I’m doing is making these outside critiques more visible to effective altruists. Although arguably a true outsider could make them more forcefully through lack of bias, Vassar understandably would rather spend his time on other things, so the best workable option is to write them up myself.
I didn’t mean that you can just take other people’s critiques as sound or unbiased, but I can guarantee you that the GiveWell competitor won’t share your bias.
In theory, you’re even his intended audience (liking EA but not 100% convinced), which means that if he’s doing his job right the arguments would be tailored to you. (Though I suspect tailoring an argument for rationalists might require different skills than tailoring it for other types of groups.)
Many of these issues seem related to Arrow’s impossibility theorem: if groups have genuinely different values and we optimize for one set rather than another, ants get tiny apartments and people starve, or we destroy the world economy because we discount too much, etc.
To clarify, I think LessWrong treats most issues as simple because we know little about them; we want to just fix them. As an example, poverty is unsolved for good reasons: it’s hard to balance incentives and growth and to deal with heterogeneity; there are absolute limits on current wealth and the ability to move it around; and the competing priorities of nations and individuals get in the way. It’s not unsolved because people are too stupid to give money to feed-the-poor charities. We underestimate the rest of the world because we’re really good at one thing and think everyone is stupid for not being good at it; even if we’re right, we’re not good at (understanding) many other things, and some of those things matter for fixing these problems.
Note: Arrow’s Impossibility Theorem is not actually a serious philosophical hurdle for a utilitarian (though related issues such as the Gibbard-Satterthwaite theorem may be). That is to say: it is absolutely trivial to create a social utility function which meets all of Arrow’s “impossible” criteria, if you simply allow cardinal instead of just ordinal utility. (Arrow’s theorem is based on a restriction to ordinal cases.)
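As a minimal sketch of that claim (illustrative code; the voters, alternatives, and utilities are invented): rank alternatives by total cardinal utility. Each alternative’s score depends only on the utilities assigned to it, so unanimity holds, no single voter dictates the outcome, and the pairwise ranking of any two alternatives is unaffected by a third.

```python
# Minimal sketch of a cardinal social welfare function: simple summation.

def social_ranking(profiles):
    """Rank alternatives from best to worst by total cardinal utility.

    profiles: dict mapping voter -> dict mapping alternative -> utility.
    """
    alternatives = next(iter(profiles.values())).keys()
    totals = {a: sum(p[a] for p in profiles.values()) for a in alternatives}
    return sorted(totals, key=totals.get, reverse=True)

# Toy profile: three voters, three alternatives.
profiles = {
    "v1": {"A": 10, "B": 2, "C": 5},
    "v2": {"A": 1,  "B": 9, "C": 5},
    "v3": {"A": 5,  "B": 3, "C": 5},
}
print(social_ranking(profiles))  # ['A', 'C', 'B'] (totals 16, 15, 14)
# The A-vs-B comparison involves only A's and B's utilities, so changing
# everyone's utility for C cannot flip the A-vs-B ordering (Arrow-style IIA).
```

(The catch is that summation presumes interpersonally comparable cardinal utilities, which is exactly the difficulty raised below.)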
Thank you for the clarification; despite this, cardinal utility is difficult because it assumes that we care about different people’s preferences the same amount, or by definably different amounts.
Unless there is a commodity that can adequately represent preferences (like money) and a fair redistribution mechanism, we still have problems maximizing overall welfare.
No argument here. It’s hard to build a good social welfare function in theory (i.e., even if you can assume away information limitations), and harder in practice (with people actively manipulating it). My point was that it is a mistake to think that Arrow showed it was impossible.
(Also: I appreciate the “thank you”, but it would feel more sincere if it came with an upvote.)
I had upvoted you. Also, I used Arrow as shorthand for that class of theorems (they all show that a class of group decision problems is unsolvable), mostly because I can never remember how to spell Satterthewaite.