Hi,
Thanks for this post. The relationship between EA and well-known moral theories is something I’ve wanted to blog about in the past.
So here are a few points:
1. EA does not equal utilitarianism.
Utilitarianism makes many claims that EA does not make:
EA takes no position on whether it's obligatory or merely supererogatory to spend one's resources helping others; utilitarianism claims that it is obligatory.
EA does not make a claim about whether there are side-constraints: certain things that it is impermissible to do, even for the greater good. Utilitarianism denies any such constraints, claiming that it's always obligatory to act for the greater good.
EA does not claim that there are no other things besides welfare that are of value; utilitarianism does claim this.
EA does not make a precise claim about what promoting welfare consists in (for example, whether it’s more important to give one unit of welfare to someone who is worse-off than someone who is better-off; or whether hedonistic, preference-satisfactionist or objective list theories of wellbeing are correct); any specific form of utilitarianism does make a precise claim about this.
Also, note that some eminent EAs are not even consequentialist leaning, let alone utilitarian: e.g. Thomas Pogge (political philosopher) and Andreas Mogensen (Assistant Director of Giving What We Can) explicitly endorse a rights-based theory of morality; Alex Foster (epic London EtG-er) and Catriona MacKay (head of the GWWC London chapter) are both Christian (and presumably not consequentialist, though I haven’t asked).
2. Rather, EA is something that almost every plausible moral theory is in favour of.
Almost every plausible moral theory holds that promoting the welfare of others in an effective way is a good thing to do. Some moral theories hold that promoting the welfare of others is merely supererogatory, and others think that there are other values at stake. But EA is explicitly in favour of promoting welfare; it's not against other things, and it doesn't claim that we're obligated to be altruistic, merely that it's a good thing to do.
3. Is EA explicitly welfarist?
The term ‘altruism’ suggests that it is. And I think that’s fine. Helping others is what EAs do. Maybe you want to do other things effectively, but then it’s not effective altruism—it’s “effective justice”, “effective environmental preservation”, or something. Note, though, that you may well think that there are non-welfarist values—indeed, I would think that you would be mistaken not to act as if there were, on moral uncertainty grounds alone—but still be part of the effective altruism movement because you think that, in practice, welfare improvement is the most important thing to focus on.
So, to answer your dilemma:
EA is not trying to be the whole of morality.
It might be the whole of morality, if being EA is the only thing that is required of one. But it's not part of the EA package that EA is the whole of morality. Rather, it represents one aspect of morality: an aspect that is very important for those living in affluent countries, who have tremendous power to help others. The idea that we in rich countries should be trying to work out how to help others as effectively as possible, and then actually going ahead and doing it, is an important part of almost every plausible moral theory.
Thanks for the response. I agree with most of the territory covered, of course, but my objection here is to the framing, not the philosophy.
So why does the website explicitly list fairness, justice and trying to do as much good as possible as EA goals in themselves? And why does user:weeatquince (whose identity we both know but I will not ‘out’ on a public forum) think that “actions and organizations that are ethical through ways other than producing welfare/happiness, as long as they apply rationality to doing good” are EA?
I think the simple answer is that "effective altruism" is a vague term. I gave you what I thought was the best way of making it precise. Weeatquince and Luke Muehlhauser wanted to make it precise in a different way. We could have a debate about which is the more useful precisification, but I don't think this is the right place for that.
On either way of making the term precise, though, EA is clearly not trying to be the whole of morality, or to give any one very specific conception of morality. It doesn't make a claim about side-constraints; it doesn't make a claim about whether doing good is supererogatory or obligatory; it doesn't make a claim about the nature of welfare. EA is a broad tent, and deliberately so: very many different ethical perspectives will agree, for example, that it's important to find out which charities do the most to improve the welfare of those living in extreme poverty (as measured by QALYs etc.), and then to encourage people to give to those charities. If so, then we've got an important activity that people of very many different ethical backgrounds can get behind, which is great!
That’s rather a double standard there. Any specific form of EA does make a precise claim about what should be maximized.