I get the impression that you’re not well informed about EA and the diverse stances EAs have, and that you’re singling out an idiosyncratic interpretation and giving it an unfair treatment.
> Effective altruism is inefficient and socially suboptimal.
The first link you cite talks about public good provision within the current economy. How do you conclude from this that e.g. the effective altruists focused on AI safety are being inefficient? And even if you’re talking about e.g. donations to GiveWell’s recommended charities, how does the first link establish that it’s inefficient? Sick people in Africa usually aren’t included in calculations about a country’s economic public goods, but EAs care about more than just their country’s economy.
> Effective Altruism isn’t utilitarian. It’s explicitly welfarist and given the complexity of individual value, probably undermines overall utility, including your own.
FYI, you’re using highly idiosyncratic terminology here. Outside of LW, “utilitarianism” is the name for a family of consequentialist views that also include solely welfare-focused varieties like negative hedonistic utilitarianism or classical hedonistic utilitarianism.
In addition, you repeat the mantra that it’s an objective fact that “human values are complex”. That’s misleading: what’s complex is human moral intuitions. When you define your goal in life, no one forces you to incorporate every single intuition that you have. You may instead choose to regard some of your intuitions as more important than others, and thereby end up with a utility function of low complexity. Your terminal values are not discovered somewhere within you (how would that process work, exactly?); they are chosen. As EY would say, “the buck has to stop somewhere”.
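To make the contrast concrete, here is a toy sketch (entirely hypothetical, not from the original discussion, and with made-up weights) of what a low-complexity utility function looks like next to one that tries to weight several intuitions at once:

```python
def low_complexity_utility(world):
    """Hedonistic-utilitarian-style goal: only total welfare counts."""
    return sum(world["welfare"])

def high_complexity_utility(world, weights):
    """'Complex values' goal: welfare, fairness, and autonomy all count."""
    total = sum(world["welfare"])
    mean = total / len(world["welfare"])
    # Penalize welfare spread as a crude stand-in for a fairness intuition.
    unfairness = sum((w - mean) ** 2 for w in world["welfare"])
    return (weights["welfare"] * total
            - weights["fairness"] * unfairness
            + weights["autonomy"] * world["autonomy"])

world = {"welfare": [3.0, 5.0, 4.0], "autonomy": 2.0}
print(low_complexity_utility(world))   # 12.0
print(high_complexity_utility(world, {"welfare": 1.0,
                                      "fairness": 0.5,
                                      "autonomy": 1.0}))  # 13.0
```

The point isn’t the specific numbers (those are invented), but that the first function needs no weighting scheme at all, while the second has to specify both which axes count and how much, and grows in complexity with every intuition you decide to keep.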
> EA is prioritarian.
This claim is wrong: only about 5% of the EAs I know are prioritarians (I have met close to 100 EAs personally). And the link you cite doesn’t support that EAs are prioritarians either; it just argues that you get more QALYs from donating to AMF than from doing other things.
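For what it’s worth, the QALY argument in that link is just a cost-effectiveness comparison, not a prioritarian one. A minimal sketch, with invented dollar figures (real cost-per-QALY estimates vary by orders of magnitude across interventions):

```python
# All figures below are made up for illustration only.

def qalys_per_dollar(cost_per_qaly_usd):
    """QALYs purchased per dollar, given a cost-per-QALY estimate."""
    return 1.0 / cost_per_qaly_usd

effective = qalys_per_dollar(100.0)     # assume $100 per QALY
typical   = qalys_per_dollar(10000.0)   # assume $10,000 per QALY

print(effective / typical)  # 100.0: same donation, ~100x the QALYs
```

Nothing in that comparison gives extra weight to the worse-off per se; it just maximizes expected QALYs per dollar, which is compatible with plain utilitarian (or merely welfarist) reasoning.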
> How do you conclude from this that e.g. the effective altruists focused on AI safety are being inefficient?
Yes, as you stated, I was working with the visible sample of EAs who aren’t focused on existential risk. I feel the term is redundant in relation to existential risk, since effective thinking about existential risk is already the norm on LessWrong.
> And even if you’re talking about e.g. donations to GiveWell’s recommended charities, how does the first link establish that it’s inefficient?
The crowding-out effect occurs not just at the individual level (which isn’t applicable to individual EAs, given the room-for-more-funding consideration), but also at the movement level. Because EAs act en bloc, and factor into their considerations ‘what are other people not funding’, they compete over the supply of and demand for donations against established institutional donors like the Gates Foundation. One might then wonder, if that were true, why those foundations don’t close the funding gaps as a priority, and it looks like someone is trying to answer that here. Admittedly, I haven’t gotten around to reading the article fully, but from a quick skim it looks like the magnitude of donations from high-impact philanthropists is such that it compensates for the ‘ineffectiveness of their cause’, since the charities GiveWell recommends have less room for more funding, which becomes a higher-order consideration at that scale. The obvious counterexample to this is GiveDirectly, but I wouldn’t be surprised if the reason philanthropists don’t like them is fear of setting a precedent against productive mutualistic exchange.
> “human values are complex”. That’s misleading, what’s complex is human moral intuitions. When you define your goal in life, no one forces you to incorporate every single intuition that you have. You may instead choose to regard some of your intuitions as more important than others, and thereby end up with a utility function of low complexity. Your terminal values are not discovered somewhere within you (how would that process work, exactly?), they are chosen. As EY would say, “the buck has to stop somewhere”.
I can’t find the original post about the buck stopping after a bit of Googling. I’d like to keep looking into this!
The post I’m referring to is here, but I should note that EY used the phrase in a different context, and my view on terminal values does not reflect his view. My critique of the idea that all human values are complex is that it presupposes too narrow an interpretation of “values”. Let’s talk about “goals” instead, defined as follows:
> Imagine you could shape yourself and the world any way you like, unconstrained by the limits of what is considered feasible: what would you do? Which changes would you make? The result describes your ideal world; it describes everything that is at all important to you. However, it does not yet describe how important these things are in relation to other things you consider important. So imagine that you had the same super-powers, but this time they are limited: you cannot make every change you had in mind, you need to prioritize some changes over others. Which changes would be most important to you? The outcome of this thought experiment approximates your goals. (This question is of course a very difficult one, and what someone says after thinking about it for five minutes might be quite different from what she would choose if she had heard all the ethical arguments in the world and thought about the matter for a very long time. If you care about making decisions for good/informed reasons, you might want to refrain from committing too much to specific answers and instead give weight to what a better-informed version of yourself would say after longer reflection.)
I took the definition from this blogpost I wrote a while back. The comment section there contains a long discussion on a similar issue where I elaborate on my view of terminal values.
Anyway, the way my definition of “goals” seems to differ from the interpretation of “values” in the phrase “human values are complex” is that “goals” allow for self-modification. If I could, I would self-modify into a utilitarian super-robot, regardless of whether it was still conscious or not. According to “human values are complex”, I’d be making a mistake in doing so. What sort of mistake would I be making?
The situation is as follows: unlike some conceivable goal-architectures we might choose for artificial intelligence, humans do not have a clearly defined goal. When you ask people on the street what their goals are in life, they usually can’t tell you, and if they do tell you something, they’ll likely revise it as soon as you press them with an extreme thought experiment. Many humans are not agenty. Learning about rationality and thinking about personal goals can turn people into agents. How does this transition happen? The “human values are complex” theory seems to imply that we introspect, find out that we care about / have intuitions about 5+ different axes of value, and end up accepting all of them for our goals. This is probably how quite a few people are doing it, but they’re victims of a gigantic typical mind fallacy if they think that’s the only way to do it. Here’s what happened to me personally (and incidentally, to 20+ agents I know personally and to all the hedonistic utilitarians who are familiar with LessWrong content and still keep their hedonistic utilitarian goals):
I started out with many things I like (friendship, love, self-actualization, non-repetitiveness, etc.) plus some moral intuitions (anti-harm, fairness). I then got interested in ethics and figuring out the best ethical theory. I turned into a moral anti-realist soon, but still wanted to find a theory that incorporates my most fundamental intuitions. I realized that I don’t care intrinsically about “fairness” and became a utilitarian in terms of my other-regarding/moral values. I then had to decide to what extent I should invest in utilitarianism/altruism, and how much in values that are more about me specifically. I chose altruism, because I have a strong, OCD-like tendency to do things either fully or not at all, and I thought saving for retirement, eating healthily, etc. is just as bothersome as trying to be altruistic; since I don’t strongly self-identify with a 100-year-old version of me anyway, I might as well try to make sure that all future sentience will be suffering-free. I still care a lot about my long-term happiness and survival, but much less so than if I had the goal to live forever, and as I said, I would instantly press the “self-modify into utilitarian robot” button if there were one. I’d be curious to hear whether I am being “irrational” somewhere, whether there was a step involved that was clearly mistaken. I cannot imagine how that would be the case, and the matter seems obvious to me. So every time I read the link “human values are complex”, it seems like an intellectually dishonest discussion stopper to me.