I like Effective Altruism a lot: I follow a lot of Effective Altruism blogs, I've adopted a lot of its mental models and tools, and I think the idea is great for many people.
I’m highly interested in how to be effective, and I’m highly interested in how to do good, and EA offers some great ideas on both fronts.
That being said, what I’m not interested in is making maximal effectiveness at doing good my sole aim. I’m more interested in expressing my values in as large and impactful a way as possible, and in allowing others to do the same. That happens to coincide with doing lots and lots of good, but it definitely doesn’t mean I would sacrifice my other values (e.g. fun, peace, expression) to maximize good. I’m interested in allowing others to express THEIR values, even if it means they’re incredibly selfish and do very little good. I suppose this almost begins to sound utilitarian, and I suppose it is, but again, I’m not going to sacrifice appreciable amounts of my own utility just to produce more utility for others, and I don’t expect others to do the same.
In terms of your critique of EA, I think you’ve completely bought into the idea of “revealed preferences”: that people’s utility is revealed in what they choose. However, a large body of psychology research shows something very different: the behaviors that get reinforced run along a “compulsion” pathway that is largely separate from what people enjoy, find happiness in, or are fulfilled by (the “wanting” vs. “liking” distinction).
Economics doesn’t really care about that stuff if it doesn’t affect people’s actions, so it’s easier to talk about “revealed preferences.” But as a utilitarian, you should be aware of all the separate pathways the brain evolved to survive and replicate, many of them separate from happiness, fulfillment, pleasure, and the other things we like to talk about when we talk about “utility.”
The upshot, as it relates to your points, is that the free market often hits a bunch of these compulsion pathways through the accumulation of money, but IGNORES other areas of utility. GiveWell is trying to fix that imbalance.
hacking the norm of reciprocity for the evolutionary benefit of future generations
You know what, you’re lesswrong. I didn’t realise before reading your comment. You’ve completely reframed some of my thinking. Thank you.
I’m going to rebrand myself as an Effective Mutualist!
Then I’m going to get serious and start reading up on how we might infer what will help others feel happiness, other than via their revealed preferences. I still feel compelled to help others, beyond what will materially benefit me or society in the long term (my thinking is that, if everyone were more mutualistic, then over the long term the more parasitic people would die off).
edit 1: The left wing tries to abolish poverty; the right wing tries to abolish bureaucracy. Perhaps there’s an innate psychological divide between people who try to get rid of social problems immediately and those who want to do it sustainably.
That being said, what I’m not interested in is making maximal effectiveness at doing good my sole aim. I’m more interested in expressing my values in as large and impactful a way as possible, and in allowing others to do the same. That happens to coincide with doing lots and lots of good, but it definitely doesn’t mean I would sacrifice my other values (e.g. fun, peace, expression) to maximize good.
It’s interesting to ask to what extent this is true of everyone. I think we’ve discussed this before, Matt.
Your version and phrasing of what you’re interested in is particular to you, but we could broaden the question and ask how far people in general have moved away from having primarily self-centred drives that overwhelm other motives when significant self-sacrifice is on the table. I think some people have moved a long way in that direction, but I’m sceptical that any single human being goes the full distance. Most EAs plausibly don’t make any significant self-sacrifices, if measured in terms of their happiness significantly dipping.* The people I know who have gone the furthest may be Joey and Kate Savoie, with whom I’ve talked about these issues a lot.
* Which doesn’t mean they haven’t done a lot of good! If people can donate 5% or 10% or 20% of their income without becoming significantly less happy, then that’s great, and convincing people to do that is low-hanging fruit we should prioritise, rather than then focusing our energies on squeezing out extra sacrifices that start to really eat into their happiness. The good consequences of people donating are what we really care about, after all, not the level of sacrifice they themselves are making.
Yes, I think in terms of my actions, I’m probably similar to many effective altruists. There are routes I wouldn’t consider, such as earning to give, but all in all I’m probably on a similar path to many other EAs who want to get into tech entrepreneurship.
I think where I differ is not in my actions but in my moral aims. Many EAs, if given a pill that would let them work all day on helping others, sustainably and without changing their enjoyment of those activities, would think they ought to take it, and a sizeable portion probably would take it. I’d never take that pill, and I wouldn’t feel bad about that choice.
(Upvoted for willingness to change your mind.)