Hi Ben,

Thanks for the post. I think this is an important discussion, though I’m also sympathetic to Nick’s comment that a significant amount of extra self-reflection is not the most important thing for EA’s success.
I just wanted to flag that I think there are attempts to deal with some of these issues, and explain why I think some of these issues are not a problem.
Philosophical difficulties
Effective altruism was founded by philosophers, so I think there’s enough effort going into this, including population ethics. (See Nick’s comment)
Poor cause choices
There’s a lot being done on this front:
GiveWell is running Labs, and Holden has said he expects to find better donation opportunities outside of global health in the next few years.
CEA advocates further cause prioritisation research, and is about to hire Owen Cotton-Barratt to work full-time on it.
80k is about to release a list of recommended causes, which will not have global health at the top.
Non-obviousness
I think the more useful framing of this problem is ‘what’s the competitive advantage that has let us come up with these ideas rather than anyone else?’ I think more work on this question would be useful. This also deals with the efficient markets problem. If you don’t have an answer to this question, I agree you should be worried.
I’ve thought about it in the context of 80k, and have some ideas (unfortunately I haven’t had time to write about them publicly). I now think the bigger priority is just to try out 80k and see how well it works. More generally, we try to take our disagreements with elite common sense very seriously.
I don’t think recency is a problem. It seems reasonable that EA could only develop after we had things like the internet, good quality trial data of different interventions, and Singer’s pond argument (which required a certain level of global inequality and globalization), which are all relatively recent.
Inconsistent attitude toward rigor
I think this is mainly because people use the best analysis that’s out there, and the best analysis for charity is currently much more in-depth than it is for these other issues. We’re trying to make progress on the other issues at 80k and CEA.
Poor psychological understanding
My impression is that people at CEA have worried about these problems quite a bit. At 80k, we try to work on this problem by highlighting members who are really trying rather than rationalising what they want, which we hope will encourage good norms. We’ll also consider calling people out, but it can be a delicate issue!
Monoculture
I’m worried about this, but it’s difficult to change. All we can do is try to make an active effort to reach out to new groups.
Community problems
I don’t see the decline in quality of the FB group as a problem. EA was started by some of the smartest, most well-meaning people I have ever met. It’s going to be almost impossible to avoid a decline in quality of discussion as the circle is widened.
I’ll also push back against equating the community with the FB group. There are efforts by other EA groups to build better venues for the community e.g. the EA Summit by Leverage. We don’t even need a good FB group so long as there are other ways for people to form projects (e.g. speak to 80k’s careers coaches) and get good information (read GiveWell’s research).
Hi Ben,

Thanks for responding. I’ve responded to your points below.
Poor cause choices
There’s a lot being done on this front:
GiveWell is running Labs, and Holden has said he expects to find better donation opportunities outside of global health in the next few years.
CEA advocates further cause prioritisation research, and is about to hire Owen Cotton-Barratt to work full-time on it.
80k is about to release a list of recommended causes, which will not have global health at the top.
The point of this argument wasn’t that organizations aren’t working on it. In fact, the existence of this research strengthens my point: people are donating now anyway, despite the fact that we appear to know very little, and the attitude toward giving now vs. later seems to be “well, there’s a good case for either one” rather than “we really need to figure this out, because we may be pouring money down the drain”. That’s evidence that people stop thinking at the level of “doesn’t obviously conflict with EA principles”.
Inconsistent attitude toward rigor
I think this is mainly because people use the best analysis that’s out there, and the best analysis for charity is currently much more in-depth than it is for these other issues. We’re trying to make progress on the other issues at 80k and CEA.
Again, the issue isn’t that nobody is trying to solve these problems; it’s that most people are far more worried about the charity analysis issue than about ancillary issues that are just as important. If our knowledge of, e.g., the cost-effectiveness of global health interventions were as limited as our knowledge elsewhere, would people be donating to global health charities? I doubt it.
Poor psychological understanding
My impression is that people at CEA have worried about these problems quite a bit. At 80k, we try to work on this problem by highlighting members who are really trying rather than rationalising what they want, which we hope will encourage good norms. We’ll also consider calling people out, but it can be a delicate issue!
I’ve been following 80k and have not noticed this phenomenon. Can you give some examples?
Monoculture
I’m worried about this, but it’s difficult to change. All we can do is try to make an active effort to reach out to new groups.
This is definitely not all we can do (unless you take a tautologically broad interpretation of “make an active effort to reach out”). For instance, if a substantial fraction of effective altruists were raging sexists, it would be wise to fix our group norms before going “hey women! there’s this thing called effective altruism!”
Even supposing it is all we can do, is there anything we’re actually doing about it?
EA was started by some of the smartest, most well-meaning people I have ever met. It’s going to be almost impossible to avoid a decline in quality of discussion as the circle is widened.
The point of the critique was not to list easily avoidable problems, but to list bad ones. If a decline in the quality of people is inevitable, then we’d better find solutions to the problems it brings (e.g. epistemic inertia), or the decline of EA is inevitable too.
Read the responses to “poor cause choices” and “inconsistent attitude toward rigor” as “while some EAs might be donating without enough thought, lots of others are investing most of their resources in doing more research”.
The monoculture problem is something we often think about how to fix at 80k. We haven’t come up with great solutions yet though.
I also argued that the decline in the FB group is not obviously important.
And if it’s difficult to avoid, but many movements started by a small group of smart people nevertheless go on to achieve a lot, that’s also evidence that it’s not important.