Hmm. I didn’t interpret a hypothetical apostasy as the fiercest critique, but rather the best critique—i.e. weight the arguments not by “badness if true” but by something like badness times plausibility.
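The weighting being proposed is just an expected-badness calculation. As a hedged sketch with invented numbers (none of these figures appear in the thread):

```python
# Toy illustration of weighting critiques by "badness times plausibility"
# (expected badness) rather than by badness alone. All numbers are
# invented for illustration; nothing here comes from the discussion.

critiques = {
    # name: (badness if true on a 0-100 scale, plausibility as a probability)
    "fierce but far-fetched": (95, 0.02),
    "moderate but plausible": (40, 0.30),
}

def expected_badness(badness, plausibility):
    """Weight an argument by how bad it would be if true AND how likely it is."""
    return badness * plausibility

# Rank by expected badness: the moderate-but-plausible critique (~12)
# outranks the fierce-but-far-fetched one (~1.9).
ranked = sorted(critiques, key=lambda name: expected_badness(*critiques[name]),
                reverse=True)
print(ranked[0])  # -> moderate but plausible
```

Under the "fiercest critique" reading, the first critique would win on badness alone; under the "best critique" reading it loses once plausibility is factored in.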
But you may be right that I unconsciously biased myself towards arguments that were easier to solve by tweaking the EA movement’s direction. For instance, I should probably have included a section about measurability bias, which does seem plausibly quite bad.
I don’t have time to explain it now, so I will state the following in the hope that merely stating it will be useful as a data point. I think Carl’s critique was more compelling, more relevant if true (as you agree), and also not that much less likely to be true than yours. Considering how destructive his points would be if true, and that they are almost as likely to be true as yours, I think Carl’s is the best critique.
In fact, the 2nd main reason I don’t direct most of my efforts to what most of the EA movement is doing is that I do think some weaker versions of Carl’s points are true. (The 1st is simply that I’m much better at finding out whether his points are true, and at other more abstract things, than at doing EA stuff.)
This does show up in the poor cause choices section, and I’m not sure it deserves a section of its own (though I do suspect it’s the most serious reason for poor cause selection, beyond the underlying population ethics being bad).
“Hmm. I didn’t interpret a hypothetical apostasy as the fiercest critique, but rather the best critique—i.e. weight the arguments not by “badness if true” but by something like badness times plausibility.”
See http://www.amirrorclear.net/academic/papers/risk.pdf. Plausibility depends on your current model, arguments, and evidence. If the badness of these being wrong, times the probability that they are, dwarfs your original estimate, you must account for it.
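The point above can be sketched numerically. This is a toy decomposition with invented figures, not anything from the linked paper:

```python
# Sketch of the point above: an in-model risk estimate can be dominated
# by the chance that the model behind it is wrong. Numbers are invented.

def total_risk(in_model_risk, p_model_wrong, badness_if_wrong):
    """In-model estimate, plus a term for the model itself being wrong."""
    return (1 - p_model_wrong) * in_model_risk + p_model_wrong * badness_if_wrong

# In-model the risk looks negligible (0.001), but even a 5% chance that
# the supporting arguments are wrong, with badness 10 in that case,
# dwarfs the in-model figure.
print(total_risk(in_model_risk=0.001, p_model_wrong=0.05, badness_if_wrong=10))
```

Here the model-error term contributes roughly 0.5, around five hundred times the in-model estimate, which is the sense in which "very implausible" judgments need to be checked rather than trusted.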
Odds are that if someone benefits from doing a hypothetical apostasy, they can’t be trusted to judge plausibility accurately. You’d want at least to use the worst-case scenario for plausibility, or simply to neglect plausibility and later make sure that the things you feel are “very implausible” are in fact very implausible.
I’m slightly suspicious of the whole hypothetical apostasy exercise: it feels like proofreading, and I find it almost impossible to proofread my own work thoroughly. Wouldn’t it be easier and better to find well-qualified critics, if they exist, and leave hypothetical apostasy for when decent critics can’t be found? Although I suppose this is already implied by hypothetical apostasy, since it would be a lazy apostate who didn’t research support for his position.
The Elitist Philanthropy of So-Called Effective Altruism
Enterprise Is the Most “Effective Altruism”
I suppose a problem with other critics is that their values likely differ from yours.
Yes, I don’t consider either the CEO of a GiveWell competitor or a couple of theologians to be well-qualified to critique effective altruism. Part of my motivation in writing this was specifically the abysmal quality of such critiques.
I think that e.g. Michael Vassar is a much more qualified outside critic (outside in the sense of not associating with the EA movement), and indeed several of my arguments here were inspired by him (as filtered through my ability to interpret his sometimes oracular remarks, so he can feel free to disown the results, though he hasn’t yet). Some of what I’m doing is making these outside critiques more visible to effective altruists. Arguably a true outsider would be able to make them more forcefully through lack of bias, but Vassar understandably would rather spend his time on other things, so the best workable option is writing them up myself.
I didn’t mean that you can take other people’s critiques as sound or unbiased, but I can guarantee you that the GiveWell competitor won’t share your biases.
In theory, you’re even his intended audience (liking EA but not being 100% convinced), which means that if he’s doing his job right, the arguments will be tailored to you. (Though I suspect tailoring an argument for rationalists requires different skills than tailoring one for other groups.)