Most people do not have identical values. This means that if you’re trying to help a lot of people, you have to rely on the things you can assess most easily. It’s a lot harder to tell how much truth, beauty, or honor (ESPECIALLY honor) someone has access to than whether they have running water or malaria. I say we should concentrate on welfare and let people take care of their own needs for abstract morality, especially considering how much they will disagree about what they want.
Effective altruism doesn’t say anything about general ethics, and I don’t know why you’re claiming it tries to. It’s about how to best help the most people. It’s about charity and reducing worldsuck. I think this is pretty obvious to everyone involved, and I don’t think people are being fooled.
The issue is whether people like the OP and myself, who are interested in reducing worldsuck, but not necessarily in the same kind of way as utilitarians, belong in the EA community or not.
I’m quite confused about this. I think my values are pretty compatible with Yudkowsky’s, but Yudkowsky seems to think he’s an EA. On the other hand, my values seem incompatible with those of e.g. Paul Christiano, who I think everyone would agree clearly is an EA. Yet those two seem to act as though they believed their values were compatible with each other. Now both of them are as intelligent as I am, maybe more so. So if I update on their apparent beliefs about which sets of values are compatible, should I conclude that I’m an EA, despite my non-endorsement of utilitarianism or any other kind of extreme altruism, or should I instead conclude that I don’t want Yudkowskian FAI after all, and start my own rival world-saving project?
Could you expand more on the incompatibility you see between Yudkowsky’s and Christiano’s values?
Christiano strikes me as the sort of person who would embrace the Repugnant Conclusion; whereas I think Yudkowsky would ultimately dodge any bullet that required him to give up turning the universe into an interesting sci-fi world whose inhabitants did things like write fanfiction stories.
Nobody actually acts like they believe in total utilitarianism, but Christiano comes as close as anyone I know of to at least threatening to act as if they believe in it. Yudkowsky, having written about complexity of value, doesn’t give me the same worry.
Utilitarianism isn’t extreme altruism. It’s just a way of trying to quantify morality; it doesn’t decide what you care about. I’m pretty tired of people reacting to the concept of utilitarianism with “Oh shit, does that mean I need to give away all my money and live at subsistence level to be a good person!?” A selfish utilitarian is just as possible as an extremely altruistic one, or a moderately altruistic one. Effective altruism is about your ALTRUISM being EFFECTIVE, not about you NEEDING to be an effective altruist. When you decide to give to a charity based on its efficiency and the percentage that goes to overhead, you are making an effective-altruism decision. This is the case regardless of whether your life is dedicated to altruism or you’re just giving 100 bucks because it’s Christmas.
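To make the efficiency point concrete, here is a minimal sketch (charity names and all numbers are made up) of why cost per outcome, rather than overhead percentage alone, is the figure that matters:

```python
# A toy comparison of two hypothetical charities. The point: what matters
# is outcome per dollar, which overhead percentage alone does not tell you.

def cost_per_outcome(total_spent, outcomes_delivered):
    """Dollars of total spending per unit of good actually done."""
    return total_spent / outcomes_delivered

# name: (total spent in $, overhead fraction, outcomes delivered)
charities = {
    "Charity A": (1_000_000, 0.05, 2_000),   # low overhead
    "Charity B": (1_000_000, 0.20, 10_000),  # higher overhead
}

# Overhead never enters the calculation directly: Charity B spends more
# on overhead yet delivers each outcome at a fifth of the cost.
for name, (spent, _overhead, outcomes) in charities.items():
    print(f"{name}: ${cost_per_outcome(spent, outcomes):.0f} per outcome")
```

This prints $500 per outcome for Charity A and $100 for Charity B, so the low-overhead charity is the less effective buy in this (hypothetical) case.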
Not on the traditional usage of the term, it isn’t—and more to the point, not as the term is being used both in the grandparent and the OP.
You’re confusing utilitarianism with plain old instrumental rationality.
There’s enough ambiguity here that I’m not totally sure, but it sounds like you’re describing consequentialist ethics, not utilitarianism as such. Utilitarianisms vary in their details, but they all imply that people’s utility is fungible, including that of their adherents: a change in (happiness, fulfillment, preference satisfaction) is just as significant whether it applies to you or to, say, a bricklayer’s son living in a malarial part of Burkina Faso.
It’s certainly possible to claim utilitarian ethics and still prioritize your own utility in practice. But that’s inconsistent—aside from a few quibbles regarding asymmetric information—with being a good person by that standard, if the standard means anything at all.
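The fungibility claim can be stated formally: a total-utilitarian aggregate is symmetric under permuting people, so it depends only on the utility changes themselves, not on whose they are. A toy sketch (the aggregation rule shown is simple summation, i.e. total utilitarianism):

```python
# Under total utilitarianism the aggregate is the sum of utility changes;
# identities are discarded, so +1 to me and +1 to a stranger are the same.

def total_utility(changes):
    """Sum utility changes across people, ignoring who receives them."""
    return sum(changes.values())

world_a = {"me": 1, "stranger": 0}  # the benefit goes to me
world_b = {"me": 0, "stranger": 1}  # the benefit goes to someone else

# A consistent total utilitarian must be indifferent between these worlds.
assert total_utility(world_a) == total_utility(world_b)
```

Prioritizing your own utility amounts to using a weighted sum instead, which is exactly the departure from the standard being described above.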
I’ve always thought of utilitarianism as an effort to quantify “good” and a framework for making moral decisions, rather than an imperative. E.g., the concept of a utility function comes out of utilitarian theory but does not presuppose utilitarian base motivations. Someone’s utility function consists of their desire to maximize welfare as well as their desires for hope and honor and whatnot.
It’s become increasingly clear that very few people think about it this way.
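On the “framework, not imperative” reading above, a utility function can weight others’ welfare alongside purely personal values. A minimal sketch, where the value names and all weights are hypothetical:

```python
# A toy utility function: a weighted combination of others' welfare and
# personal values (here "truth" and "honor", per the thread's examples).

def utility(others_welfare, truth, honor, weights):
    """Linear combination of value dimensions; weights encode priorities."""
    w_welfare, w_truth, w_honor = weights
    return w_welfare * others_welfare + w_truth * truth + w_honor * honor

# A 'selfish utilitarian' simply puts a small weight on others' welfare;
# an altruistic one puts most of the weight there. Same framework, 
# different values.
selfish  = utility(10, 5, 5, weights=(0.05, 0.50, 0.45))
altruist = utility(10, 5, 5, weights=(0.90, 0.05, 0.05))
```

Under these made-up weights the altruist’s score is dominated by others’ welfare while the selfish agent’s is not, illustrating that the quantification machinery itself is neutral about what you care about.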
Yep, see the SEP on Utilitarianism and the LW wiki on utility functions.
Except when it talks about fairness, justice, and trying to do as much good as possible without restriction.