First: I agree with your broad point that more segmentation in the EA community would be helpful. I don’t think we disagree as much as you think we do, and in fact I would categorize this post as part of the “being introspective about being a movement” that I’m advocating. So perhaps I’m accusing you of failure to be meta-contrarian :P
We need to accept, as EAs, what LessWrong as a blog has accepted: there will always be a group that is passive and feeling-oriented, and a group that is outcome-oriented. Even if the subject matter of Effective Altruism is outcomes.
I really appreciate this point; it’s something that hadn’t occurred to me. Naively it seems strange to me that people would look to effective altruism for feeling-oriented activity, since EA is pretty much explicitly about disregarding feeling orientation; but on reflection, seeming strange is not much of a reason for something not to be true, and in fact this is obviously happening.
I think you understand this, but under this framework, my objection might be something like: the feeling-oriented people think they’re outcome-oriented, or have to signal outcome-orientation in order to fit in and satisfy their need for feeling-oriented interaction. (This is where my epithet of “pretending” comes in; perhaps “signalling trying” is more appropriate.) Having feeling-oriented people signalling outcome-orientation is stopping the outcome-oriented people from pushing forward the EA state of the art, because it adds epistemic inertia (feeling-oriented people have less belief pressure from truth-seeking and more from social satisficing).
it is false that the entire movement is a single monolithic entity making wrong and right decisions in a void
I understand this. However, the apostasy was against associating with EA as a movement. If I say “it’s a problem that EA doesn’t do X” I mean “it’s a problem that nobody within EA has caused X to come about/social norms are not set up to encourage X/etc.” For whatever reason, X has failed to happen, and from the point of view of an apostate, I don’t really care whether it was because EA monolithically decided not to do X or because every individual decided not to or because there was some failure of incentives going on. That’s an implementation detail of the solution, not a feature of the critique.
Each one of us is also not one single monolithic agent. We have good and bad days, and we are made of lots of tiny little agents within, whose goals and purposes are only our own when enough of them coalesce so that our overall behavior goes in a certain direction. Just as you can’t criticize EA as a whole for something that its subsets haven’t done (the fancy philosophers’ word for this is mereological fallacy), you likewise can’t claim that a particular individual, as a whole, pretends to try because you’ve seen him have one or two lazy days, or because he is still addicted to a particular video game.
True. But if you see people making large life decisions that look like they’re pretending to try (e.g. satisficing on donations or career choice), this should be a red flag. This isn’t the kind of decision you make on one bad day (at least, I hope not).
Ben’s post makes the case for really effective altruism too demanding. Not even internally are we truly a monolithic entity, or a utility-function optimizer—regardless of how much we may wish we were. My favoured reading of the current state of Effective Altruists is not that they are pretending to really try, but that most people are finding, for themselves, which aspects of their personalities they are willing to bend for altruism, and which they are not.
If people are “finding” this, they are only finding it intuitively—the fact that it’s a personal thing is not always rising to the level of conscious awareness. I would be pretty OK with people finding out which aspects they aren’t willing to bend and saying e.g. “oh, I satisficed on donations because of analysis paralysis”, but instead what comes out is a huge debate that feels like it’s truth-seeking, but actually people have a personal stake in it.
The Effective Altruist community does not need to get introspectively even more focused on effectiveness
I agree with what I think you mean, although I’m not quite sure what you mean by “effectiveness” in this case. EA needs to get more introspectively focused on directly being effective. It needs to get more introspectively focused on being a movement that can last a long time, while maintaining (acquiring?) the ability to come to non-trivial true beliefs about what is effective. This largely does not consist of us directly trying to solve the problem of “how do we, the EA movement, become better utilitarians?” but rather sub-questions like “how do we build good dialogue” and “how do we …”
And lastly, a couple of technical issues (which I don’t think affect either of our main points very much):
He used the fact that that critique had not been written as sufficiently strong indication that the whole movement, a monolithic, single entity, had failed its task of being introspective enough about its failure modes. This is unfair on two accounts: first, someone had to be the first (at some cost to all of us), and the movement seems young enough that that is not a problem; second, it is false that the entire movement is a single monolithic entity making wrong and right decisions in a void. The truth is that there are many people in the EA community at different stages of life, and of involvement with the movement. We should account for that and make room for newcomers as well as for ancient sages. EA is not one single entity that made one huge mistake. It is a couple thousand people, whose subgroups are working hard on several distinct things, frequently without communicating, and whose supergoal is reborn every day with the pushes and drifts going on inside the community.
This is wrong: I cited a lot of other evidence that effective altruists were stopping thinking too early, including not working very hard on population ethics, satisficing on cause choice, not caring about historical outside views, not caring about diversity, not making a good case for their edge over e.g. the Gates Foundation, and having an inconsistent attitude towards rigor. The organization of the conclusion made the “no critiques” argument seem more central than I intended (which was my bad), but it’s definitely not the crux of the argument (and has been partially rebutted by Will MacAskill on Facebook anyway, where he brought up several instances that this has happened in private).
EDIT: Diego removed the parenthetical remark; the following no longer applies. (How do I strikethrough?)
(at some cost to all of us)
I would prefer that you turn this, which reads to me like a pot-shot, into a substantive criticism, or discard it. On Facebook you said, “…I really don’t think the text does good, and I strongly fear some of its possible aftereffects,” but you haven’t elaborated very much. I’d very much like to hear your objections—you can private message me if you think they’re sensitive.
if you see people making large life decisions that look like they’re pretending to try (e.g. satisficing on donations or career choice), this should be a red flag.
I don’t think this is as bad as it looks. An underrated benefit to pretending to try is that those who pretend to try still often do more good than they would if they didn’t pretend at all.
Before I encountered EA, I wanted to be a college professor. After I encountered EA and was convinced by 80K-style career choice, I “pretended to try” (subconsciously, without realizing it) by finding EA arguments for why it was optimal to be a college professor (pay is decent, great opportunity to influence, etc.). Of course, this wasn’t really my EA-optimal career path. But it was a whole lot better than it was before I considered EA (because I was aiming to influence when before I was not, because I was planning on donating ~30% of my expected salary when before I was not going to donate anything, etc.). Definitely not EA-optimal, but significantly better.
Additionally, many people are willing to give up some things, but not all things. Once I noticed that I was merely pretending, I thought that maybe I should just be comfortable ignoring EA considerations when it came to careers and make sure I did something I wanted. Noted EA superstar Julia Wise has this kind of career—she could do much better money-wise, if only she were willing to sacrifice more than she currently is.
Of course, now I think I am on an EA-optimal career path that doesn’t involve pretending (heading towards web development), so things did turn out OK. But only after pretending for a while.
Yes, I noted throughout the post that pretending to actually try gets you farther than following social defaults because it rules out a bunch of ideas that obviously conflict with EA principles. I still think it’s quite bad in the sense of adding epistemic inertia.
if you see people making large life decisions that look like they’re pretending to try (e.g. satisficing on donations or career choice), this should be a red flag.
This seems to expose a bit of a tension between two possible goals for the EA movement. One of them is “We need more rigor and people to be trying harder!” and the other one is “We need to bring in lots of people who aren’t trying quite as hard; we can have a bigger total impact by getting lots of people to do just a little bit.” The second one is closer to what e.g. Peter Singer is trying to do, by setting a low baseline threshold of doing good and trying to make it the norm for as many people as possible to do good at that threshold.
Is it actually that bad to have people in the movement who are just doing good because of social pressure? If we make sure that the things we’re socially pressuring people to do are actually very good things, then that could be good anyway. Even if they’re just satisficing, this could be a net good, as long as we’re raising the threshold of what “satisficing” is by a lot.
I guess the potential problem there is that maybe if satisficing is the norm, we’ll encourage complacency, and thereby get fewer people who are actually trying really hard instead of just satisficing. Maybe it’s just a balancing act.
Having feeling-oriented people signalling outcome-orientation is stopping the outcome-oriented people from pushing forward the EA state of the art, because it adds epistemic inertia (feeling-oriented people have less belief pressure from truth-seeking and more from social satisficing).
I’m not sure I understand you here. Are you saying that because feeling-oriented people will pretty much believe what they are socially pressured to believe, the outcome-oriented people will also stop truth-seeking?
On your last point: there are two equally interesting claims that fit Ben’s comment:
1) Above a certain proportion, or threshold, feeling-oriented altruists may reinforce/feed only themselves, thus setting the bar on EA competence really, really low (yes, lurker, you are as good as Toby, who donates most of his income and is counterfactually responsible for 80k and GWWC). I addressed this below.
2) Outcome-oriented people may fail to create signal among the noise (if they don’t create the AEA Facebook group), or succumb more frequently to drifting back into their feeling-oriented selves, getting hedons for small amounts of utilons.
2) is what I’m more concerned about. I don’t really mind if there are a bunch of people being feeling-oriented if they don’t damage the outcome-orientation of the outcome-oriented group at all. (But I think that counterfactual is so implausible as to be difficult to imagine in detail.) To the extent that they do, as Maia said, it’s a trade-off.
Would you be so kind as to create the Advancing Effective Altruism Facebook group, Ben? (Or add me to it, in case the secret society of Effective Altruists is in its ninth generation and I’m just making a fool of myself here before the arcane sages.) I can help you invite people and create a description. I can’t administer it, or choose who belongs or not, since I’m too worried about more primeval things, like going to Berkeley, surviving financially, and making sure www.ierfh.org remains active when I leave Brazil despite our not having any funding; and I don’t know everyone within EA as, say, Carl does.
I don’t think Facebook is a good forum for productive discussion even among more outcome-oriented people. See my post and Brian Tomasik’s remarks for why.
Solving the problem of having a good discussion forum is hard and requires more bandwidth than I have right now, though Ozzie Gooen might know about projects heading in that direction. I think continuing to use LW discussion would be preferable to Facebook.
I don’t hold the belief that LessWrong discussion contains only people who are at the cutting edge of EA theory. I don’t think you do either. That solution does not apply to a problem that you and I, more than anyone, agree is happening. We do not have an ultimate forum that buzzes top-notch outcome-oriented altruists to think about specific things that others in the same class have thought of.
A moderator in such a community should save all discussions, so they are formal in character and eternalized on a website. But certainly the buzzing effect which Facebook and emails have (and only Facebook and emails have) is a necessary component, as long as the group is restricted only to top theorists. Since no one cares about this more than you and I, I am asking you to do it; I wouldn’t even know who to invite besides those I cited in the post.
If you really think that one of the main problems with EA at the moment is the absence of a space for outcome-oriented individuals to rock our world, and to communicate without the chaotic blog proliferation that has been going on, I can hardly think your bandwidth would be better invested elsewhere (the same goes for you, dear Effective Altruist reading this post).
I don’t hold the belief that LessWrong discussion contains only people who are at the cutting edge of EA theory. I don’t think you do either.
Non sequitur. I don’t hold this belief but I nevertheless think that Less Wrong would be better than Facebook for pushing forward the state of EA. Reasons include upvotes, searchability and preservation with less headache for the moderator, threading, etc.
I can hardly think your bandwidth would be better invested elsewhere
You just gave me some great reasons why your own bandwidth is better invested elsewhere. I’m surprised that you can’t think of analogous ones that might apply to me right now.
Anyway, I think this is an important problem but not the important problem (as you seem to think I think), and also one that I have quite a large comparative disadvantage in solving correctly compared to other people, and other important problems that I could solve. If no one else decides it’s worthy of their time I’ll (a) update against it being worthwhile and (b) get around to it when I have free bandwidth.
So that other kinds of minds can comment, I’ll try to be brief for now, and suggest we carry on this one-on-one thread in a couple of days, so that others don’t feel discouraged from coming up with ideas neither of us has thought of yet.
For the same reason, I won’t address your technical points here. But I praise you for responding promptly and in an uplifting mood.
Signaling wars: History shows that signaling wars between classes are not so bad. The high class (of outcome orientation; in this case, say, Will MacAskill belongs to it) does not need to signal that it belongs to the high class. They are known to belong. The middle class, people who are sort of effective altruists, may want to signal that they belong to the high class. Finally, there are the people in the feeling-oriented-only class, who are lurking and don’t need to signal that much; when they do, it is obvious and not dangerous. It is like when a 17-year-old decides he has solved quantum physics and writes a paper about it: the world keeps turning. So the main worry would be the middle class trying to signal being more altruistic than they are. My hope is that either they will end up influencing people anyway by their signaling attempt, or else they will, unbeknownst to themselves, fake it till they make it, slowly becoming more altruistic. I mean, it feels good to actually do good, so over time, given a division of labour with asymmetric diffusion of information from the outcome-oriented to the feeling-oriented, I expect the signaling wars not to be a big problem. I do, however, agree that it is a problem, and invite others to think about it.
Large decisions don’t come from the day’s mood: By and large, I think they don’t (despite those papers about how absurdly small things can get people to subscribe to completely unrelated causes), so I agree with you. What I want to emphasize is that we are composed of many tiny homunculi, not only in the time dimension but across all dimensions. Maybe 70% of someone decided for EA, but the part that didn’t wants to be a nature photographer; I think saying that person has failed the EA ethos would be throwing the baby out with the bathwater. Just as it would not justify throwing them out of the “Advancing Effective Altruism” closed Facebook group.