I don’t recall Givewell ever claiming to be unbiased, although it is a problem for other actors in EA to treat Givewell’s cost-effectiveness estimates as unbiased.
Pretty weird that restating a bunch of things GiveWell says gets construed as an attack on GiveWell (rather than the people distorting what it says), and that people keep forgetting or not noticing those things, in directions that make giving based on GiveWell’s recommendations seem like a better deal than it is. Why do you suppose that is?
I believe it’s because people get their identities very caught up in EA, and, for EAs focused on global poverty alleviation, in Givewell and its recommended charities. So, when someone like you criticizes Givewell, a lot of them react in primarily emotional ways, creating a noisy space where messages like yours get lost. So, the points you’re trying to make about Givewell, and the similar points many others have tried to make, don’t stick for enough of the EA community, or whoever else the relevant groups of people are. Thus, in the collective memory of the community, these things are forgotten or not noticed. Then, the cycle repeats itself each time you write another post like this.
So, EA largely isn’t about actually doing altruism effectively (which requires having correct information about what things actually work, e.g. estimates of cost per life saved, and not adding noise to conversations about these), it’s an aesthetic identity movement around GiveWell as a central node, similar to e.g. most popular environmentalism (which, for example, opposes nuclear power despite it being good for the environment, because nuclear power is discordant with the environmentalism identity/aesthetics, and Greenpeace is against it), which is also claiming credit for, literally, evaluating and acting towards the moral good (as environmentalism claims credit for evaluating and acting towards the health of the planet). This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated values of EA, EA-as-it-is ought to be replaced with something very, very different.
[EDIT: noting that what you said in another comment also agrees with the aesthetic identity movement view: “I imagine the CEA does it through some combo of revering Singer, thinking it’s good for optics, and not thinking the level of precision at which the error is taking place is so grievous as to be objectionable in the context they’re presented in.”]
I agree with your analysis of the situation, but I wonder whether it’s possible to replace EA with anything that won’t turn into exactly the same thing. After all, the EA movement is the result of some people noticing that much of existing charity is like this, and saying “we should replace that with something very, very different”…
And EA did better than the previous things, along some important dimensions! And people attempting to do the next thing will have EA as an example to learn from, which will (hopefully) prompt them to read and understand sociology, game theory, etc. The question of “why do so many things turn into aesthetic identity movements” is an interesting and important one, and, through study of this (and related) questions, it seems quite tractable to have a much better shot at creating something that produces long-term value, than by not studying those questions.
Success is nowhere near guaranteed, and total success is quite unlikely, but, trying again (after a lot of study and reflection) seems like a better plan than just continuing to keep the current thing running.
The question of “why do so many things turn into aesthetic identity movements” is an interesting and important one, and, through study of this (and related) questions, it seems quite tractable to have a much better shot at creating something that produces long-term value, than by not studying those questions.
I agree that studying this is quite important. (If, of course, such an endeavor is entered into with the understanding that everyone around the investigators, and indeed the investigators themselves, have an interest in subverting the investigation. The level of epistemic vigilance required for the task is very unusually high.)
It is not obvious to me that further attempts at successfully building the object-level structure (or even defining the object-level structure) are warranted, prior to having substantially advanced our knowledge on the topic of the above question. (It seems like you may already agree with me, on this; I am not sure if I’m interpreting your comment correctly.)
I’m going to flip this comment back on you, so you can understand how I’m seeing it, and why I fail to see why the point you’re trying to make matters.
So, rationality largely isn’t actually about doing thinking clearly (which requires having correct information about what things actually work, e.g., well-calibrated priors, and not adding noise to conversations about these), it’s an aesthetic identity movement around HPMoR as a central node, similar to, e.g., most popular environmentalism (which, for example, opposes nuclear power despite it being good for the environment, because nuclear power is discordant with the environmentalism identity/aesthetics, and Greenpeace is against it). This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated values of rationality, rationality-as-it-is ought to be replaced with something very, very different.
One could nitpick about how HPMoR has done much more to save lives through AI alignment than Givewell has ever done through developing-world interventions, and I’ll go share that claim, attributed to Jessica Taylor, in defence of (at least some of) what Ben Hoffman is trying to achieve, perhaps among other places on the public internet, and we’ll see how that goes. The point I was trying to make is that much of the rationality community has nothing to do with the community’s stated values. So, in stating your personal impression of EA, based on Sarah’s blog post, as though it were a fact, and as if it means something unique about EA that isn’t true of other human communities, you’ve argued for too much.
Also, in this comment I indicated my awareness of what was once known as the “Vassar crowd”, which I recall you were a part of:
Shall I point out to all the communities of x-risk reduction, long-term world improvement, EA, and rationality that Michael Arc/Vassar and some of his friends formed a “Vassar crowd”, a cell aimed at unilaterally driving a wedge between x-risk/rationality and EA, which included you, Sarah Constantin, Michael Arc, Alyssa Vance, among others? Should I hold just you or Michael Arc individually responsible for the things you’ve each done since then that have given you mixed reputations, or should I castigate all of you and Michael’s friends in the bunch too, along with as much of the rationality community as I feel like? After all, you’re all friends, and you decided to make the effort together, even though each of you made your own individual contributions.
While we’re here, would you mind explaining to me why all of your beef was with the EA community, as misleading in myriad ways to the point of menacing x-risk reduction efforts and other pursuits of what is true and good, without your applying the same pressure to parts of the rationality community that pose the same threat, or, for that matter, to any other group that does the same? What makes EA special?
So, rationality largely isn’t actually about doing thinking clearly [...] it’s an aesthetic identity movement around HPMoR as a central node [...] This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated values of rationality, rationality-as-it-is ought to be replaced with something very, very different.
This just seems obviously correct to me, and I think my failure to properly integrate this perspective until very recently has been extremely bad for my sanity and emotional well-being.
Specifically: if you fail to make a hard mental distinction between “rationality”-the-æsthetic-identity-movement and rationality-the-true-art-of-systematically-correct-reasoning, then finding yourself in a persistent disagreement with so-called “rationalists” about something sufficiently basic-seeming creates an enormous amount of cognitive dissonance (“Am I crazy? Are they crazy? What’s going on?? Auuuuuugh”) in a way that disagreeing with, say, secular humanists or arbitrary University of Chicago graduates, doesn’t.
But … it shouldn’t. Sure, self-identification with the “rationalist” brand name is a signal that someone knows some things about how to reason. And, so is graduating from the University of Chicago. How strong is each signal? Well, that’s an empirical question that you can’t answer by taking the brand name literally.
I thought the “rationalist” æsthetic-identity-movement’s marketing literature expressed this very poetically—

How can you improve your conception of rationality? Not by saying to yourself, “It is my duty to be rational.” By this you only enshrine your mistaken conception. Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.
Do not ask whether it is “the Way” to do this or that. Ask whether the sky is blue or green. If you speak overmuch of the Way you will not attain it.
Of course, not everyone is stupid enough to make the mistake I made—I may have been unusually delusional in the extent to which I expected “the community” to live up to the ideals expressed in our marketing literature. For an example of someone being less stupid than recent-past-me, see the immortal Scott Alexander’s comments in “The Ideology Is Not the Movement” (“[...] a tribe much like the Sunni or Shia that started off with some pre-existing differences, found a rallying flag, and then developed a culture”).
This isn’t to say that the so-called “rationalist” community is bad, by the standards of our world. This is my æsthetic identity movement, too, and I don’t see any better community to run away to—at the moment. (Though I’m keeping an eye on the Quillette people.) But if attempts to analyze how we’re collectively failing to live up to our ideals are construed as an attack, that just makes us even worse than we already are at living up to our own ideals!
(Full disclosure: uh, I guess I would also count as part of the “Vassar crowd” these days??)
But if attempts to analyze how we’re collectively failing to live up to our ideals are construed as an attack, that just makes us even worse than we already are at living up to our own ideals!
For Ben’s criticisms of EA, it’s my opinion that while I agree with many of his conclusions, I don’t agree with some of the strongest conclusions he reaches, or with how he argues for them, simply because I believe they are not good arguments. This is common for interactions between EA and Ben these days, though Ben often doesn’t respond to counter-arguments, as he seems to be under the impression that when a counter-argument disagrees with him in a way he doesn’t himself agree with, his interlocutors are persistently acting in bad faith. I haven’t interacted directly with Ben myself much for a while, until he wrote the OP this week. So, I haven’t been following as closely how Ben construes ‘bad faith’, and I haven’t taken the opportunity to discover, if he were willing to relay it, what his model of bad faith is. I currently find his sense that some of the EAs he discusses these topics with are acting in bad faith confusing. At least, I don’t find it a compelling account of people’s real motivations in discourse.
I haven’t been following as closely how Ben construes ‘bad faith’, and I haven’t taken the opportunity to discover, if he were willing to relay it, what his model of bad faith is.

I think the most relevant post by Ben here is “Bad Intent Is a Disposition, Not a Feeling”. (Highly recommended!)
Recently I’ve often found myself wishing for better (widely-understood) terminology for phenomena that it’s otherwise tempting to call “bad faith”, “intellectual dishonesty”, &c. I think it’s pretty rare for people to be consciously, deliberately lying, but motivated bad reasoning is horrifyingly ubiquitous and exhibits a lot of the same structural problems as deliberate dishonesty, in a way that’s worth distinguishing from “innocent” mistakes because of the way it responds to incentives. (As Upton Sinclair wrote, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”)
If our discourse norms require us to “assume good faith”, but there’s an important sense in which that assumption isn’t true (because motivated misunderstandings resist correction in a way that simple mistakes don’t), but we can’t talk about the ways it isn’t true without violating the discourse norm, then that’s actually a pretty serious problem for our collective sanity!
So, I’ve read the two posts on Benquo’s blog you’ve linked to. The first one, “Bad Intent Is a Disposition, Not a Feeling”, depended on the claim he made that mens rea is not a real thing. As was pointed out in the comments, and as he himself acknowledged, those comments made some good points that would cause him to rethink the theme he was trying to impart with his original post. I looked up both the title of that post and ‘mens rea’ on his blog to see if he had posted any updated thoughts on the subject. There were no results on either topic from the date of that post’s publication onward, so it doesn’t appear he has publicly updated his thoughts on these topics. That was over two years ago.
The second post on the topic was more abstract and figurative, using analogy and metaphor to get its conclusion across. So, I didn’t totally understand how everything in the second post was relevant to the first, even though the second was intended as a sequel to the first. It seemed to me the crux of resolving the problem was:
Sadly, being honest about your sense that someone else is arguing in bad faith is Officially Not OK. It is read as a grave and inappropriate attack. And as long as that is the case, he could reasonably expect that bringing it up would lead to getting yelled at by everyone and losing the interaction. So maybe he felt and feels like he has no good options here.
Benquo’s conclusion is that, for public discourse and social epistemology, at least in his experience, being honest about your sense that someone else is arguing in bad faith is Officially Not OK, because it is always construed as a grave and inappropriate personal attack, so resolving the issue appears socially or practically impossible. My experience is that this just isn’t the case: being honest in this way can lend itself to better modes of public discourse. For one thing, it can move communities to states of discourse much different from where the EA and rationality communities currently are. One problem is that I’m not sure even those rationalists and EAs who are aware of such problems would prefer the options available, which would amount to hopping onto different platforms with very different discourse norms. I would think that would be the most practical option, since the other viable alternative would be for these communities to adopt other communities’ discourse norms wholesale, replacing their own. That seems extremely unlikely to happen.
Part of the problem is that Benquo seems to construe ‘bad faith’ with an overly reductionistic definition. This was fleshed out in the comments on the original post on his blog, by commenters AGB and Res. So, that makes it hard for me to accept the frame Benquo bases his eventual conclusions on. Another problem for me is that the inferential distance gaps between myself, Benquo, and the EA and rationality communities, respectively, are so large now that it would take a lot of effort to write them up and explain them all. Since it isn’t a super high priority for me, I’m not sure that I will get around to it. However, there is enough material in Benquo’s posts, and in the discussion in the comments, that I can work with it to explain some of what I think is wrong with how he construes bad faith in these posts. If I write something like that up, I will post it on LW.
I don’t know if the EA community in large part disagrees with the OP for the same reasons I do. I think, based on some of the material I have been provided with in the comments here, I have more to work with to find the cruxes of disagreement I have with how some people are thinking, whether critically or not, about the EA and rationality communities.
I understand the “Vassar Crowd” to be a group of Michael Vassar’s friends who:
were highly critical of EA.
were critical, though somewhat less so, of the rationality community.
were partly at odds with the bulk of the rationality community for not being as hostile to EA as they thought it should have been.
Maybe you meet those qualifications, but as I understand it the “Vassar Crowd” started publishing blog posts on LessWrong and their own personal blogs, as well as on social media, over the course of a few months starting in the latter half of 2016. It was part of a semi-coordinated effort. While I wouldn’t posit a conspiracy, it seems like a lot of these criticisms of EA were developed in conversations within this group, and, given the name of the group, I assume the different people involved were primarily nudged by Vassar. This also precipitated Alyssa Vance’s Long-Term World Improvement mailing list.
It doesn’t seem to have continued as a crowd to the present, as the lives of the people involved have obviously changed a lot, and from the outside it doesn’t appear to be as cohesive anymore, I assume in large part because of Vassar’s decreased participation in the community. Ben seems to be one of the only people still sustaining the effort to criticize EA the way the others did before.
So while I appreciate the disclosure, I don’t know if my previous comment was precise enough; as far as I understand it, the Vassar Crowd was a more limited clique that manifested much more in the past than in the present.
The point I was trying to make is that much of the rationality community has nothing to do with the community’s stated values.
Yes, this is true, and also implies that the rationality community should be replaced with something very different, according to its stated goals. (Did you think I didn’t think that?)

Geeks, Mops, Sociopaths happened to the rationality community, not just EA.
So, in stating your personal impression of EA, based on Sarah’s blog post, as though it were a fact, and as if it means something unique about EA that isn’t true of other human communities, you’ve argued for too much.
I don’t think it’s unique! I think it’s extremely, extremely common for things to become aesthetic identity movements! This makes the phenomenon matter more, not less!
I have about as many beefs with the rationality movement as I do with the EA movement. I am commenting on this post because Ben already wrote it and I had things to add.
It’s possible that I should feel more moral pressure than I currently do to actively (not just as a comment on other people’s posts) say publicly what’s wrong with the current state of the rationality community. I’ve already been saying things privately. (This is an invitation to try morally pressuring me, using arguments, if you think it would actually be good for me to do this.)
Thanks for acknowledging my point about the rationality community. However, I was trying to get across more generally that I think the ‘aesthetic identity movement’ model might be lacking. If a theory makes the same predictions everywhere, it’s useless. I feel like the ‘aesthetic identity movement’ model might be one of those theories that is too general and not specific enough for me to understand what I’m supposed to take away from its use. For example:
So, the United States of America largely isn’t actually about being a land of freedom to which the world’s people may flock (which requires having everyone’s civil liberties consistently upheld, e.g., robust support for the rule of law, and not adding noise to conversations about these), it’s an aesthetic identity movement around the Founding Fathers as a central node, similar to, e.g., most popular environmentalism (which, for example, opposes nuclear power despite it being good for the environment, because nuclear power is discordant with the environmentalism identity/aesthetics, and Greenpeace is against it). This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated values of America, America ought to be replaced with something very, very different.
Maybe, if all kinds of things are aesthetic identity movements instead of being what they actually say they are, I wouldn’t be as confused if I knew what I am supposed to do with this information.
An aesthetic identity movement is one where everything is dominated by how things look on the surface, not what they actually do/mean in material reality. Performances of people having identities, not actions of people in reality. To some extent this is a spectrum, but I think there are attractor states of high/low performativity.
It’s possible for a state not to be an aesthetic identity movement, e.g. by having rule of law, actual infrastructure, etc.
It’s possible for a movement not to be an aesthetic identity movement, by actually doing the thing, choosing actions based on expected value rather than aesthetics alone, having infrastructure that isn’t just doing signalling, etc.
Academic fields have aesthetic elements, but also (some of the time) do actual investigation of reality (or, of reasoning/logic, etc) that turns up unexpected information.
Mass movements are more likely to be aesthetic identity movements than obscure ones. Movements around gaining resources through signalling are more likely to be aesthetic identity movements than ones around accomplishing objectives in material reality. (Homesteading in the US is an example of a historical movement around material reality)
(Note, EA isn’t only an aesthetic identity movement, but it is largely one, in terms of percentage of people, attention, etc; this is an important distinction)
It seems like the concept of “aesthetic identity movement” I’m using hasn’t been communicated to you well; if you want to see where I’m coming from in more detail, read the following.

Geeks, MOPs, and sociopaths
Identity and its Discontents
Naming the Nameless
On Drama
Optimizing for Stories (vs. Optimizing Reality)
Excerpts from a larger discussion about simulacra

(no need to read all of these if it doesn’t seem interesting, of course)

I will take a look at them. Thanks.
Yes, this is true, and also implies that the rationality community should be replaced with something very different, according to its stated goals. (Did you think I didn’t think that?)
I don’t think you didn’t think that. My question was to challenge you to answer why you, and the others if you would feel comfortable speaking to their perspectives, focus so much of your attention on EA instead of the rationality community (or other communities perhaps presenting the same kind and degree of problems), if you indeed understand they share similar problems, and pose similarly high stakes (e.g., failure modes of x-risk reduction).
I asked because it’s frustrating to me how inconsistent it is with your own efforts here to put way more pressure on EA than on rationality. I’m guessing part of the reason for your trepidation about criticizing the rationality community is that you feel a sense of how much disruption it could cause, and how much risk there is that nothing would change either way. The same thing has happened when, not so much you, but some of your friends have criticized EA in the past. I was thinking it was because you are socially closer to the rationality community that you wouldn’t be as willing to criticize it.
I am not as invested in rationality as a community as I was in the past. So, while I feel some personal responsibility to analyze the intellectual failure modes of rationality, I don’t feel much of a moral urge anymore to correct its social failure modes. So, I lack the motivation to think through whether it would be “good” or not for you to do it.
I think I actually do much more criticism of the rationality community than the EA community nowadays, although that might be invisible to you since most of it is private. (Anyway, I don’t do that much public criticism of EA either, so this seems like a strange complaint about me regardless)
Well, this was a question more about your past activity than your present activity, and also about the greater activity of the same kind by some people you seem to know well, but I thought I would take the opportunity to ask you about it now. At any rate, thanks for taking the time to humour me.
My question was to challenge you to answer why you, and the others if you would feel comfortable speaking to their perspectives, focus so much of your attention on EA instead of the rationality community (or other communities perhaps presenting the same kind and degree of problems), if you indeed understand they share similar problems, and pose similarly high stakes (e.g., failure modes of x-risk reduction).
It doesn’t seem to me like anyone I interact with is still honestly confused about whether and to what extent e.g. CFAR can teach rationality, or rationality provides the promised superpowers. Whereas some people still believe a few core EA claims (like the one the OP criticizes) which I think are pretty implausible if you just look at them in conjunction and ask yourself what else would have to be true.
If you or anyone else want to motivate me to criticize the Rationality movement more, pointing me at people who continue to labor under the impression that the initial promises were achievable is likely to work; rude and condescending “advice” about how the generic reader (but not any particular person) is likely to feel the wrong way about my posts on EA is not likely to work.
So, I agree with the claim that EA has a lot of aesthetic-identity elements going on that compound (and in many cases cause) the problem. I think that’s really important to acknowledge (although it’s not obvious that the solution needs to include starting over).
But I also think that, in the case of this particular post, the answer is simpler. The OP says:
Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated. My former employer GiveWell in particular stands out as a problem here, since it publishes such cost-per-life-saved numbers
Which… sure uses language that sounds like it’s an attack on Givewell to me.

[edit] The above paragraph seems:
a) dishonest and/or false, in that it claims Givewell publishes such cost-per-life numbers, but at the moment AFAICT Givewell goes to great lengths to hide those numbers (i.e. to find the numbers for AMF you get redirected to a post about how to think about the numbers, which links to a spreadsheet, which seems like the right procedure to me for forcing people to actually think a bit about the numbers)
b) uses phrases like “hoarding” and “wildly exaggerated” that I generally associate with coalition politics rather than denotative-language-that-isn’t-trying-to-be-enacting, while criticizing others for coalition politics, which seems both like bad form, and not like a process that I expect to result in something better-than-EA at avoiding pathologies that stem from coalition politics.
[double edit] to be clear, I do think it’s fair to criticize CEA and/or the EA community collectively for nonetheless taking the numbers as straightforward. And I think their approach to OpenAI deserves, at the very least, some serious scrutiny. (Although I think Ben’s claims about how off they are are overstated. This critique by Kelsey seems pretty straightforwardly true to me. AFAICT in this post Ben has made a technical error approximately of the same order of magnitude as what he’s claiming others are making.)
My comment was a response to Evan’s, in which he said people are reacting emotionally based on identity. Evan was not explaining people’s response by referring to actual flaws in Ben’s argumentation, so your explanation is distinct from Evan’s.
a) GiveWell does publish cost-effectiveness estimates. I found them in a few clicks. So Ben’s claim is neither dishonest nor false.
b) So, the fact that you associate these phrases with coalitional politics, means Ben is attacking GiveWell? What? These phrases have denotative meanings! They’re pretty clear to determine if you aren’t willfully misinterpreting them! The fact that things that have clear denotative meanings get interpreted as attacking people is at the core of the problem!
To say that Ben creating clarity about what GiveWell is doing is an attack on GiveWell, is to attribute bad motives to GiveWell. It says that GiveWell wants to maintain a positive impression of itself, regardless of the facts, i.e. to defraud nearly everyone. (If GiveWell wants correct information about charities and charity evaluations to be available, then Ben is acting in accordance with their interests [edit: assuming what he’s saying is true], i.e. the opposite of attacking them).
Perhaps you endorse attributing bad motives to GiveWell, but in that case it would be hypocritical to criticize Ben for doing things that could be construed as doing that.
a) GiveWell does publish cost-effectiveness estimates. I found them in a few clicks. So Ben’s claim is neither dishonest nor false.
While I agree that this is a sufficient rebuttal of Ray’s “dishonest and/or false” charge (Ben said that GiveWell publishes such numbers, and GiveWell does, in fact, publish such numbers), it seems worth acknowledging Ray’s point about context and reduced visibility: it’s not misleading to publish potentially-untrustworthy (but arguably better than nothing) numbers surrounded by appropriate caveats and qualifiers, even when it would be misleading to loudly trumpet the numbers as if they were fully trustworthy.
That said, however, Ray’s “GiveWell goes to great lengths to hide those numbers” claim seems false to me in light of an email I received from GiveWell today (the occasion of my posting this belated comment), which reads, in part:
GiveWell has made a lot of progress since your last recorded gift in 2015. Our current top charities continue to avert deaths and improve lives each day, and are the best giving opportunities we’re aware of today. To illustrate, right now we estimate that for every $2,400 donated to Malaria Consortium for its seasonal malaria chemoprevention program, the death of a child will be averted.
A friend also recently mentioned getting this email to me, and yes, this does significantly change my outlook here.

Further update on this. Givewell has since posted this blogpost. I haven’t yet reviewed this enough to have a strong opinion on it, but I think it at least explains some of the difference in epistemic state I had at the time of this discussion.

Relevant bit:
A few years ago, we decided not to feature our cost-effectiveness estimates prominently on our website. We had seen people using our estimates to make claims about the precise cost to save a life that lost the nuances of our analysis; it seemed they were understandably misinterpreting concrete numbers as conveying more certainty than we have. After seeing this happen repeatedly, we chose to deemphasize these figures. We continued to publish them but did not feature them prominently.
Over the past few years, we have incorporated more factors into our cost-effectiveness model and increased the amount of weight we place on its outputs in our reviews (see the contrast between our 2014 cost-effectiveness model versus our latest one). We thus see our cost-effectiveness estimates as important and informative.
We also think they offer a compelling motivation to donate. We aim to share these estimates in such a way that it’s reasonably easy for anyone who wants to dig into the numbers to understand all of the nuances involved.
These phrases have denotative meanings! They’re pretty clear to determine if you aren’t willfully misinterpreting them! The fact that things that have clear denotative meanings get interpreted as attacking people is at the core of the problem!
I wonder if it would help to play around with emotive conjugation? Write up the same denotative criticism twice, once using “aggressive” connotations (“hoarding”, “wildly exaggerated”) and again using “softer” words (“accumulating”, “significantly overestimated”), with a postscript that says, “Look, I don’t care which of these frames you pick; I’m trying to communicate the literal claims common to both frames.”
When he wrote:

Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense.
In most contexts when language like this is used, it’s usually pretty clear that you are implying someone is doing something closer to deliberately lying than some softer kind of deception. I am aware Ben might have some model under which Givewell or others in EA are acting in bad faith in some other manner, involving self-deception. If that is what he is implying Givewell or Good Ventures are doing instead of deliberately lying, that isn’t clear from the OP. He could also have stated that the organizations in question are not fully aware they’re just marketing obvious nonsense, and have been immune to his attempts to point this out to them; but he didn’t state that in the OP either.
So, based on their prior experience, I believe it would appear to many people like he was implying Givewell, Good Ventures, and EA are deliberately lying. Deliberate lying is generally seen as a bad thing. So, to imply someone is deliberately lying seems to clearly be an attribution of bad motives to others. So if Ben didn’t expect or think that is how people would construe part of what he was trying to say, I don’t know what he was going for.
I think the current format isn’t a good venue for me to continue this discussion. For now, roughly, I disagree with the framing in your most recent comment, and stand by my previous comment.
I’ll try to write up a top level post that outlines more of my thinking here. I’d have some interest in a private discussion that gets turned into a google doc that gets turned into a post, or possibly some other format. I think public discussion threads are a uniquely bad format for this sort of thing.