I haven’t read your entire series of posts on Givewell and effective altruism, so I’m basing this comment mostly on this post alone. It seems to jump all over the place.
You say:
Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated. My former employer GiveWell in particular stands out as a problem here, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell’s top charities; they were worried that this would be an unfair way to save lives.
This sets up a false dichotomy. Both the Gates Foundation and Good Ventures are focused on areas in addition to funding interventions in the developing world. Obviously, they both believe those other areas, e.g., in Good Ventures’ case, existential risk reduction, present them with the opportunity to prevent just as many deaths as interventions in the developing world, if not more. Of course, a lot of people disagree with the idea that something like AI alignment, which Good Ventures funds, is in any way comparable to cost-effective interventions in the developing world in terms of how many deaths it prevents, its cost-effectiveness, or its moral value. Yet given that you used to work for Givewell and are now much more focused on AI alignment, it doesn’t seem like you’re one of those people.
If you were one of those people, you would find it quite objectionable that Good Ventures is not spending all its money on developing-world interventions, and is instead spreading its grants out over time to shape the longer-term future through AI safety and other focus areas. If you are that kind of person, i.e., you believe it is indeed objectionable that Good Ventures is ‘hoarding’ its money for other focus areas like AI alignment (from the viewpoint that its top priority should be developing-world interventions), then I’d note that this is not at all clear or obvious.
Unless you believe that, then right here there is a third option beyond “the Gates Foundation and Good Ventures are hoarding money at the price of millions of deaths” and “the numbers are wildly exaggerated”. That is, both foundations believe the money they are reserving for focus areas other than developing-world interventions isn’t being hoarded at the expense of millions of lives. Presumably, this is because both foundations also believe the counterfactual expected value of these other focus areas is at least comparable to the expected value of developing-world interventions.
If, across the proportions of their endowments they’ve respectively allotted for developing-world interventions and other focus areas, the Gates Foundation and Good Ventures appear not to be giving away their money as quickly as they could while still being as effective as possible, then your objecting to that would make sense. However, that would be a separate thesis that you haven’t covered in this post. Were you to put forward such a thesis, you’ve already laid out the case for what’s wrong with a foundation like Good Ventures not fully funding the developing-world interventions of Givewell’s recommended charities each year.
Yet you would still need to make additional arguments for what Good Ventures is doing wrong by only granting as much to another focus area like AI alignment as it annually does now, instead of grantmaking at a much higher annual rate or volume. Were you to do that, it would be appropriate to point out what is wrong with the reasons an organization like the Open Philanthropy Project (Open Phil) doesn’t grant much more to its other focus areas each year.
For example, one reason it wouldn’t make sense for Open Phil to grant 100x as much in total to AI risk each year as it does now, starting this year, is that it’s not clear AI risk as a field currently has that much room for more funding. It is at least not clear AI risk organizations could sustain such a high growth rate if their grants from Open Phil were 100x bigger than they are now. That’s an entirely different point than any you made in this post. Also, as far as I’m aware, that isn’t an argument you’ve made anywhere else.
Given that you are presumably familiar with these considerations, it seems to me you should have been able to anticipate the possibility of the third option. In other words, unless you’re going to make the case that either:
it is objectionable for a foundation like Good Ventures to reserve some of their endowment for the long-term development of a focus area like AI risk, instead of using it all to fund cost-effective developing-world interventions, and/or;
it is objectionable Good Ventures isn’t funding AI alignment more than they currently are, and why;
you should have been able to tell in advance the dichotomy you presented is indeed a false one. It seems like, of the two options in the dichotomy you presented, you believe cost-effectiveness estimates like those from Givewell are wildly exaggerated. I don’t know why you presented it as though you thought it might just as easily be either of the two scenarios, but the fact that you’re exactly the kind of person who should have been able to anticipate a plausible third scenario, and didn’t, undermines the point you’re trying to make.
Either scenario clearly implies that these estimates are severely distorted and have to be interpreted as marketing copy designed to control your behavior, not unbiased estimates designed to improve the quality of your decisionmaking process.
One thing that falls out of my above commentary is that since it is not clearly the case that only one of the two scenarios you presented is true, it is not necessarily the case either that the mentioned cost-effectiveness estimates “have to be interpreted as marketing copy designed to control your behavior”. What’s more, you’ve presented another false dichotomy here. It is not the case that Givewell’s cost-effectiveness estimates must be only and exclusively one of either:
severely distorted marketing copy designed for behavioural control.
unbiased estimates designed to improve the quality of your decision-making process.
Obviously, Givewell’s estimates aren’t unbiased. I don’t recall Givewell ever claiming to be unbiased, although it is a problem for other actors in EA to treat Givewell’s cost-effectiveness estimates as unbiased. I recall from reading a couple of posts from your series on Givewell that it seemed as though you were trying to hold Givewell responsible for the exaggerated rhetoric of others in EA who use Givewell’s cost-effectiveness estimates. It seems like you’re doing that again now. I never understood then, and I don’t understand now, why you’ve tried explaining all this as if Givewell is responsible for how other people are misusing their numbers. Perhaps Givewell should do more to discourage a culture of exaggeration and bluster in EA built on people using their cost-effectiveness estimates and prestige as a charity evaluator to make claims about developing-world interventions that aren’t actually backed up by Givewell’s research and analysis.
Yet that is another, different argument you would have to make, and one that you didn’t. To hold Givewell as exclusively culpable, as you have in the past and present, for how their cost-effectiveness estimates and analyses have been misused would only be justified by some kind of evidence that Givewell is actively trying to cultivate a culture of exaggeration, bluster, and shiny-distraction-via-prestige around themselves. I’m not saying no such evidence exists, but if it does, you haven’t presented any of it.
We should be more skeptical, not less, of vague claims by the same parties to even more spectacular returns on investment for speculative, hard to evaluate interventions, especially ones that promise to do the opposite of what the argument justifying the intervention recommends.
You make this claim as though it might be the exact same people in the organizations of Givewell, Open Phil, and Good Ventures who are responsible for all the following decisions:
presenting Givewell’s cost-effectiveness estimates in the way they do.
making recommendations to Good Ventures via Givewell about how much Good Ventures should grant to each of Givewell’s recommended charities.
Good Ventures’ stake in OpenAI.
However, it isn’t the same people making all of these decisions across these 3 organizations.
Dustin Moskovitz and Cari Tuna are ultimately responsible for what kinds of grants Good Ventures makes, regardless of focus area, but they obviously delegate much decision-making to Open Phil.
Good Ventures obviously has tremendous influence over how Givewell conducts the research and analysis behind its cost-effectiveness estimates, but by all appearances Good Ventures has let Givewell operate with a great deal of autonomy, and hasn’t been trying to influence Givewell to dramatically alter how it conducts its research and analysis. Thus, it would make sense to look to Givewell, and not Good Ventures, for what to make of that research and analysis.
Elie Hassenfeld is the current executive director of Givewell, and thus is the one to be held ultimately accountable for Givewell’s cost-effectiveness estimates and recommendations to Good Ventures. Holden Karnofsky is a co-founder of Givewell, but has for a long time been focused full-time on his role as executive director of Open Phil. Holden no longer co-directs Givewell with Elie.
As ED of Open Phil, Holden has spearheaded Open Phil’s work in, and Good Ventures’ funding of, AI risk research.
That there is a division of labour whereby Holden has led Open Phil’s work, and Elie Givewell’s, has been common knowledge in the effective altruism movement for a long time.
What many people disagreed with about Open Phil recommending Good Ventures take a stake in OpenAI, and Holden Karnofsky consequently being made a board member of OpenAI, was based on the particular roles played by the people involved in the grant investigation, which I won’t go through here. It was also based, as with yourself, on the expectation that OpenAI may make the state of things in AI risk worse rather than better, given either OpenAI’s ignorance or misunderstanding of how AI alignment research should be conducted, at least in the eyes of many people in the rationality and x-risk reduction communities.
The assertion that Givewell is wildly exaggerating its cost-effectiveness estimates is an assertion that the numbers are being fudged at a different organization than Open Phil. The common denominator is of course that Good Ventures made grants based on recommendations from both Open Phil and Givewell, and that Holden and Elie are co-founders of both Open Phil and Givewell. However, in the two separate cases of Givewell’s cost-effectiveness estimates and of Open Phil’s process for recommending Good Ventures take a stake in OpenAI, these are two separate organizations, run by two separate teams, led separately by Elie and Holden respectively. If in each of the cases you present, Givewell’s estimates and Open Phil’s support for OpenAI, something wrong has been done, they are two very different kinds of mistakes made for very different reasons.
Again, Good Ventures is ultimately accountable for the grants made in both cases. You could hold each organization accountable separately, but when you refer to them as the “same parties”, you’re making it out as though Good Ventures and their satellite organizations are, generically, either incompetent or dishonest. I say “generically” because, while you set it up that way, you know as well as anyone the specific ways in which the two cases of Givewell’s estimates, and Open Phil’s/Good Ventures’ relationship with OpenAI, differ. You know this because you have been one of the most prominent individual critics, if not the most prominent, in both cases for the last few years.
Yet when you call them all the “same parties”, you’re treating both cases as if the ‘family’ of Good Ventures and surrounding organizations generally can’t be trusted, because it’s opaque to us how they come to make these decisions that lead to dishonest or mistaken outcomes, as you’ve alleged. Yet you’re one of the people who made clear to everyone else how the decisions were made; which different people/organizations made the decisions; and what one might find objectionable about them.
To substantiate the claim that the two different cases of Givewell’s estimates and Open Phil’s relationship to OpenAI are sufficient grounds to conclude that none of these organizations, nor their parent foundation Good Ventures, can generally be trusted, you could have held Good Ventures accountable for not being diligent enough in monitoring the fidelity of the recommendations it receives from either Givewell or Open Phil. Yet you didn’t do that. You could also, now or in the past, have tried to make the argument that Givewell and Open Phil should each separately be held accountable for what you see as their mistakes in the two separate cases. Yet you didn’t do that either.
Making any of those arguments would have made sense. Yet what you did is you treated it as though Givewell, Open Phil, and Good Ventures all play the same kind of role in both cases. Not even all 3 organizations are involved in both cases. To summarize all this, the two cases of Givewell’s estimates, and Open Phil’s relationship to OpenAI, if they are problematic, are not the same kinds of problems caused by Good Ventures for the same reasons. Yet you’re making it out as though they are.
It might make more sense if you were someone else who just saw the common connection of Good Ventures, and didn’t know how to go about criticizing them other than to point out they were sloppy in both cases. Yet you know everything I’ve mentioned about who the different people are in each of the two cases, and the different kinds of decisions each organization is responsible for, and how they differ in how they make those decisions. So, you know how to hold each organization separately accountable for what you see as their separate mistakes. You know these things because you:
identified as an effective altruist for several years.
have been a member of the rationality community for several years.
are a former employee of Givewell.
have transitioned since you’ve left Givewell to focusing more of your time on AI alignment.
Yet you make it out as though Good Ventures, Givewell, and Open Phil are some unitary blob that makes poor decisions. Had you made any one, or even all, of the specific alternative arguments I suggested about how to hold each of the 3 organizations individually accountable, it would have been a lot easier for you to make a solid and convincing argument than the one you’ve actually made regarding these organizations. Because you didn’t, this is another instance of you undermining what you yourself are trying to accomplish with a post like this.
As far as I can see, this pretty much destroys the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent.
You started this post off with what’s wrong with Peter Singer’s cost-effectiveness estimates from his 1997 essay. Then you pointed out similar errors you see being made by specific EA-aligned organizations today. Then you bridge to how, because funding gaps are illusory given the erroneous cost-effectiveness estimates, the Gates Foundation and Good Ventures are doing much less than they should with regard to developing-world interventions.
Then, you zoom in on what you see as the common pattern of bad recommendations being given to Good Ventures by Open Phil and Givewell. Yet the two cases of recommendations you’ve provided are from 2 separate organizations that make their decisions and recommendations in very different ways, and are run by 2 different teams of staff, as I pointed out above. And since, as I’ve established, you’ve known all this in intimate detail for years, you’re making arguments that make much less sense than the ones you could have made based on the information available to you.
None of that has anything to do with the Gates Foundation. You told me in response to another comment I made on this post that it was another recent discussion on LW where the Gates Foundation came up that inspired you to make this post. You made your point about the Gates Foundation. Then, that didn’t go anywhere, because you made unrelated points about unrelated organizations.
For the record, when you said:
If you give based on mass-marketed high-cost-effectiveness representations, you’re buying mass-marketed high-cost-effectiveness representations, not lives saved. Doing a little good is better than buying a symbolic representation of a large amount of good. There’s no substitute for developing and acting on your own models of the world.
and
Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits.
none of that applies to the Gates Foundation, because the Gates Foundation isn’t an EA-aligned organization “mass-marketing high cost effectiveness representations” in a bid to get small, individual donors to build up a mass movement of effective charitable giving to fill illusory funding gaps they could easily fill themselves. Other things being equal, the Gates Foundation could obviously fill the funding gap. None of the rest of those things apply to the Gates Foundation, though, and they would have to for it to make sense that this post, and its thesis, were inspired by mistakes being made by the Gates Foundation, not just EA-aligned organizations.
However, going back to “the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent”, it seems like you’re claiming the thesis of Singer’s 1997 essay, and the basis for effective altruism as a movement(?), are predicated exclusively on reliably nonsensical cost-effectiveness estimates from Givewell/Open Phil, not just for developing-world interventions, but in general. None of that is true, because Singer’s thesis is not based exclusively on a specific set of cost-effectiveness estimates about specific causes from specific organizations, and Singer’s thesis isn’t the exclusive basis for the effective altruism movement. Even if that were a logically valid argument, it would not be sound either way, because, as I’ve pointed out above, the premise that it makes sense to treat Givewell, Open Phil, and Good Ventures like a unitary actor is false.
In other words, because “mass-marketed high-cost-effectiveness representations” are not the foundation of “the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent” in general, and certainly aren’t some kind of primary basis for effective altruism, if that was something you were suggesting, your conclusion destroys nothing.
To summarize:
you knowingly presented a false dichotomy about why the Gates Foundation and Good Ventures don’t donate their entire endowments to developing-world interventions.
you knowingly set up a false dichotomy whereby either everyone has been acting the whole time as if Givewell’s and Open Phil’s cost-effectiveness estimates are unbiased, or the estimates are wildly exaggerated because those organizations are deliberately trying to manipulate people’s behaviour.
you cannot claim you could not have been cognizant of the fact these dichotomies are false, because the evidence with which you present them is your own prior conclusions, drawn in part from your personal and professional experiences.
you said this post was inspired by the point you made about the Gates Foundation, but that has nothing to do with the broader arguments you’ve made about Good Ventures, Open Phil, or Givewell, and those arguments don’t back the conclusion you’ve consequently drawn about utilitarianism and effective altruism.
In this post, you’ve raised some broad concerns about things happening in the effective altruism movement that I think are worth serious consideration.
My former employer GiveWell in particular stands out as a problem here, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell’s top charities; they were worried that this would be an unfair way to save lives.
I don’t believe the rationale for why Givewell doesn’t recommend to Good Ventures to fully fund Givewell’s top charities totally holds up, and I’d like to understand better why they don’t. I think Givewell maybe should recommend Good Ventures fully fund their own top charities each year.
Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits.
The concern that EA has a tendency to move people in a direction too far away from these more ordinary and concrete aspects of their lives is a valid one.
I am also unhappy with much of what has happened relating to OpenAI.
All these are valid concerns that would be much easier to take seriously from you if you presented arguments for them on their own, as opposed to presenting them as a few of many different assertions that relate to each other, at best, in a very tenuous manner, in a big soup of an argument against effective altruism that doesn’t logically hold up, based on the litany of unresolved issues with it I’ve pointed out above. It’s also not clear why you wouldn’t have realized any of this before you made this post, since all the knowledge that served as evidence for your premises was available to you beforehand; it was information you yourself published on the internet.
Even if all the apparent leaps of logic you’ve made in this post are artifacts of this post being a truncated summary of your entire, extensive series of posts on Givewell, and EA, the entire structure of this one post undermines the point(s) you’re trying to make with it.
I think I can summarize my difficulties with this comment a bit better now.
(1) It’s quite long, and brings up many objections that I dealt with in detail in the longer series I linked to. There will always be more excuses someone can generate that sound facially plausible if you don’t think them through. One has to limit scope somehow, and I’d be happy to get specific constructive suggestions about how to do that more clearly.
(2) You’re exaggerating the extent to which Open Philanthropy Project, Good Ventures, and GiveWell, have been separate organizations. The original explanation of the partial funding decision—which was a decision about how to recommend allocating Good Ventures’s capital—was published under the GiveWell brand, but under Holden’s name. My experience working for the organizations was broadly consistent with this. If they’ve since segmented more, that sounds like an improvement, but doesn’t help enough with the underlying revealed preferences problem.
I’d be happy to get specific constructive suggestions about how to do that more clearly.
I don’t know that this suggestion is best – it’s a legitimately hard problem – but a policy I think would be pretty reasonable is:
When responding to lengthy comments/posts that include at least 1-2 things you know you dealt with in a longer series, one option is to simply leave it at: “hmm, I think it’d make more sense for you to read through this longer series and think carefully about it before continuing the discussion” rather than trying to engage with any specific points.
And then shifting the whole conversation into a slower mode, where people are expected to take a day or two in between replies to make sure they understand all the context.
(I think I would have had similar difficulty responding to Evan’s comment as what you describe here)
To clarify a bit—I’m more confused about how to make the original post more clearly scope-limited, than about how to improve my commenting policy.
Evan’s criticism in large part deals with the facts that there are specific possible scenarios I didn’t discuss, which might make more sense of e.g. GiveWell’s behavior. I think these are mostly not coherent alternatives, just differently incoherent ones that amount to changing the subject.
It’s obviously not possible to discuss every expressible scenario. A fully general excuse like “maybe the Illuminati ordered them to do it as part of a secret plot,” for instance, doesn’t help very much, since that posits an exogenous source of complications that isn’t very strongly constrained by our observations, and doesn’t constrain our future anticipations very well. We always have to allow for the possibility that something very weird is going on, but I think “X or Y” is a reasonable short hand for “very likely, X or Y” in this context.
On the other hand, we can’t exclude scenarios arbitrarily. It would have been unreasonable for me, on the basis of the stated cost-per-life-saved numbers, to suggest that the Gates Foundation is, for no good reason, withholding money that could save millions of lives this year, when there’s a perfectly plausible alternative—that they simply don’t think this amazing opportunity is real. This is especially plausible when GiveWell itself has said that its cost per life saved numbers don’t refer to some specific factual claim.
“Maybe partial funding because AI” occurred to enough people that I felt the need to discuss it in the long series (which addressed all the arguments I’d heard up to that point), but ultimately it amounts to a claim that all the discourse about saving “dozens of lives” per donor is beside the point since there’s a much higher-leverage thing to allocate funds to—in which case, why even engage with the claim in the first place?
Any time someone addresses a specific part of a broader issue, there will be countless such scope limitations, and they can’t all be made explicit in a post of reasonable length.
Yet what you did is you treated it as though Givewell, Open Phil, and Good Ventures all play the same kind of role in both cases. Not even all 3 organizations are involved in both cases.
They share a physical office! Good Ventures pays for it! I’m not going to bother addressing comments this long in depth when they’re full of basic errors like this.
For the record, this is no longer going to be true starting in I think about a month, since GiveWell is moving to Oakland and Open Phil is staying in SF.
1. Givewell focuses on developing-world interventions, and not AI alignment or any other focus area of Open Phil, which means they aren’t responsible for anything to do with OpenAI.
2. It’s unclear from what you write what role, if any, Open Phil plays in the relationship between Givewell and Good Ventures with respect to Givewell’s annual recommendations to Good Ventures. If it were clear Open Phil was somehow an intermediary in that regard, then treating all 3 projects under 1 umbrella as 1 project with no independence between any of them might make sense. You didn’t establish that, so it doesn’t make sense.
3. Good Ventures signs off on all the decisions Givewell and Open Phil make, and it should be held responsible for the decisions of both. Yet you know that there are people who work for Givewell and Open Phil who make decisions that are completed before Good Ventures signs off on them. Or I assume you know this, since you worked for Givewell. If you somehow know it’s all top-down both ways, that Good Ventures tells Open Phil and Givewell each what it wants from them, and Open Phil and Givewell just deliver the package, then say so.
Yes, they do share the same physical office. Yes, Good Ventures pays for it. Shall I point to a mistake made by one of MIRI, CFAR, or LW, but not more than one, and then link that mistake, whenever it was made, and however tenuously, to all of those organizations?
Should I do the same to any two or more other AI alignment/x-risk organizations you favour, who share offices or budgets in some way?
Shall I point out to all the communities of x-risk reduction, long-term world improvement, EA, and rationality that Michael Arc/Vassar and some of his friends formed a “Vassar crowd”, a cell aimed at unilaterally driving a wedge between x-risk/rationality and EA, which included you, Sarah Constantin, Michael Arc, Alyssa Vance, among others? Should I not hold you or Michael Arc individually responsible for the things you’ve done since then that have caused you to have a mixed reputation, or should I castigate all of you and Michael’s friends in the bunch too, along with as much of the rationality community as I feel like? After all, you’re all friends, and you decided to make the effort together, even though you each made your own individual contributions.
I won’t do those things. Yet that is what it would be for me to behave as you are behaving. I’ll ask you one more question about what you might do: when can I expect you to publicly condemn FHI on the grounds it’s justified to do so because FHI is right next door to CEA, yet Nick Bostrom lacks the decency to go over there and demand the CEA stop posting misleading stats, lest FHI break with the EA community forevermore?
I’m not going to bother addressing comments this long in depth when they’re full of basic errors like this.
While there is what you see as at least one error in my comment, there are many items I see as errors in your post that I will bring to everyone’s attention. It will be revised, edited, and polished to not have the errors you see in it, or at least it will be clear enough that what I am and am not saying won’t be ambiguous. It will be a top-level article on both the EA Forum and LW. A large part of it is going to be that you at best are using extremely sloppy arguments, and at worst are making blatant attempts to use misleading info to convince others to do what you want, just as you accuse Good Ventures, Open Phil, and Givewell of doing. One theme will be that you’re still in the x-risk space, employed in AI alignment, and willing to do this toward your former employers, who are also involved in the x-risk/AI alignment space. So, while you may not want to bother addressing these points, I imagine you will have to eventually for the sake of your reputation.
Singer’s thesis is not based exclusively on a specific set of cost-effectiveness estimates about specific causes from specific organizations, and Singer’s thesis isn’t the exclusive basis for the effective altruism movement.
Then why do Singer and CEA keep making those exaggerated claims? I don’t see why they’d do that if they didn’t think it was responsible for persuading at least some people.
Then why do Singer and CEA keep making those exaggerated claims?
I don’t know. Why don’t you ask Singer and/or the CEA?
I don’t see why they’d do that if they didn’t think it was responsible for persuading at least some people.
They probably believe it is responsible for persuading at least some people. I imagine the CEA does it through some combo of revering Singer, thinking it’s good for optics, and not thinking the level of precision at which the error is taking place is so grievous as to be objectionable in the context they’re presented in.
I don’t expect to get an honest answer to “why do you keep making dishonest claims?”, for reasons I should hope are obvious. I had hoped I might get any answer at all from you about why *you* (not Singer or CEA) claim that Singer’s thesis is not based exclusively on a specific set of cost-effectiveness estimates about specific causes from specific organizations, or why you think it’s relevant that Singer’s thesis isn’t the exclusive basis for the effective altruism movement.
I don’t recall Givewell ever claiming to be unbiased, although it is a problem for other actors in EA to treat Givewell’s cost-effectiveness estimates as unbiased.
Pretty weird that restating a bunch of things GiveWell says gets construed as an attack on GiveWell (rather than the people distorting what it says), and that people keep forgetting or not noticing those things, in directions that make giving based on GiveWell’s recommendations seem like a better deal than it is. Why do you suppose that is?
I believe it’s because people get their identities very caught up in EA, and, for EAs focused on global poverty alleviation, in Givewell and their recommended charities. So, when someone like you criticizes Givewell, a lot of them react in primarily emotional ways, creating a noisy space where the sound of messages like yours gets lost. So, the points you’re trying to make about Givewell, or the similar points many others have tried making about Givewell, don’t stick for enough of the EA community, or whoever else the relevant groups of people are. Thus, in the collective memory of the community, these things are forgotten or not noticed. Then, the cycle repeats itself each time you write another post like this.
So, EA largely isn’t about actually doing altruism effectively (which requires having correct information about what things actually work, e.g. estimates of cost per life saved, and not adding noise to conversations about these), it’s an aesthetic identity movement around GiveWell as a central node, similar to e.g. most popular environmentalism (which, for example, opposes nuclear power despite it being good for the environment, because nuclear power is discordant with the environmentalism identity/aesthetics, and Greenpeace is against it), which is also claiming credit for, literally, evaluating and acting towards the moral good (as environmentalism claims credit for evaluating and acting towards the health of the planet). This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated values of EA, EA-as-it-is ought to be replaced with something very, very different.
[EDIT: noting that what you said in another comment also agrees with the aesthetic identity movement view: “I imagine the CEA does it through some combo of revering Singer, thinking it’s good for optics, and not thinking the level of precision at which the error is taking place is so grievous as to be objectionable in the context they’re presented in.”]
I agree with your analysis of the situation, but I wonder whether it’s possible to replace EA with anything that won’t turn into exactly the same thing. After all, the EA movement is the result of some people noticing that much of existing charity is like this, and saying “we should replace that with something very, very different”…
And EA did better than the previous things, along some important dimensions! And people attempting to do the next thing will have EA as an example to learn from, which will (hopefully) prompt them to read and understand sociology, game theory, etc. The question of “why do so many things turn into aesthetic identity movements” is an interesting and important one, and, through study of this (and related) questions, it seems quite tractable to have a much better shot at creating something that produces long-term value, than by not studying those questions.
Success is nowhere near guaranteed, and total success is quite unlikely, but, trying again (after a lot of study and reflection) seems like a better plan than just continuing to keep the current thing running.
The question of “why do so many things turn into aesthetic identity movements” is an interesting and important one, and, through study of this (and related) questions, it seems quite tractable to have a much better shot at creating something that produces long-term value, than by not studying those questions.
I agree that studying this is quite important. (If, of course, such an endeavor is entered into with the understanding that everyone around the investigators, and indeed the investigators themselves, have an interest in subverting the investigation. The level of epistemic vigilance required for the task is very unusually high.)
It is not obvious to me that further attempts at successfully building the object-level structure (or even defining the object-level structure) are warranted, prior to having substantially advanced our knowledge on the topic of the above question. (It seems like you may already agree with me, on this; I am not sure if I’m interpreting your comment correctly.)
I’m going to flip this comment on you, so you can understand how I’m seeing it, and why I fail to see why the point you’re trying to make matters.
So, rationality largely isn’t actually about doing thinking clearly (which requires having correct information about what things actually work, e.g., well-calibrated priors, and not adding noise to conversations about these), it’s an aesthetic identity movement around HPMoR as a central node, similar to, e.g., most popular environmentalism (which, for example, opposes nuclear power despite it being good for the environment, because nuclear power is discordant with the environmentalism identity/aesthetics, and Greenpeace is against it). This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated value of rationality, rationality-as-it-is ought to be replaced with something very, very different.
One could nitpick about how HPMoR has done much more to save lives through AI alignment than Givewell has ever done through developing-world interventions, and I could go share that info, as coming from Jessica Taylor, in defence of (at least some of) what Ben Hoffman is trying to achieve, perhaps among other places on the public internet, and we’ll see how that goes. The point I was trying to make is that much of the rationality community has nothing to do with the community’s stated values. So, in stating your personal impression of EA, based on Sarah’s blog post, as though it were a fact, and as if it means something unique about EA that isn’t true of other human communities, you’ve argued for too much.
Also, in this comment I indicated my awareness of what was once known as the “Vassar crowd”, which I recall you were a part of:
Shall I point out to all the communities of x-risk reduction, long-term world improvement, EA, and rationality that Michael Arc/Vassar and some of his friends formed a “Vassar crowd”, a cell aimed at unilaterally driving a wedge between x-risk/rationality and EA, which included you, Sarah Constantin, Michael Arc, Alyssa Vance, among others? Should I not hold you or Michael Arc individually responsible for the things you’ve done since then that have caused you to have a mixed reputation, or should I castigate all of you and Michael’s friends in the bunch too, along with as much of the rationality community as I feel like? After all, you’re all friends, and you decided to make the effort together, even though you each made your own individual contributions.
While we’re here, would you mind explaining to me what all of your beef was with the EA community as misleading in myriad ways, to the point of menacing x-risk reduction efforts and other pursuits of what is true and good, without applying the same pressure to parts of the rationality community that pose the same threat, or, for that matter, to any other group of people doing the same? What makes EA special?
So, rationality largely isn’t actually about doing thinking clearly [...] it’s an aesthetic identity movement around HPMoR as a central node [...] This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated value of rationality, rationality-as-it-is ought to be replaced with something very, very different.
This just seems obviously correct to me, and I think my failure to properly integrate this perspective until very recently has been extremely bad for my sanity and emotional well-being.
Specifically: if you fail to make a hard mental distinction between “rationality”-the-æsthetic-identity-movement and rationality-the-true-art-of-systematically-correct-reasoning, then finding yourself in a persistent disagreement with so-called “rationalists” about something sufficiently basic-seeming creates an enormous amount of cognitive dissonance (“Am I crazy? Are they crazy? What’s going on?? Auuuuuugh”) in a way that disagreeing with, say, secular humanists or arbitrary University of Chicago graduates, doesn’t.
But … it shouldn’t. Sure, self-identification with the “rationalist” brand name is a signal that someone knows some things about how to reason. And, so is graduating from the University of Chicago. How strong is each signal? Well, that’s an empirical question that you can’t answer by taking the brand name literally.
How can you improve your conception of rationality? Not by saying to yourself, “It is my duty to be rational.” By this you only enshrine your mistaken conception. Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.
Do not ask whether it is “the Way” to do this or that. Ask whether the sky is blue or green. If you speak overmuch of the Way you will not attain it.
Of course, not everyone is stupid enough to make the mistake I made—I may have been unusually delusional in the extent to which I expected “the community” to live up to the ideals expressed in our marketing literature. For an example of someone being less stupid than recent-past-me, see the immortal Scott Alexander’s comments in “The Ideology Is Not the Movement” (“[...] a tribe much like the Sunni or Shia that started off with some pre-existing differences, found a rallying flag, and then developed a culture”).
This isn’t to say that the so-called “rationalist” community is bad, by the standards of our world. This is my æsthetic identity movement, too, and I don’t see any better community to run away to—at the moment. (Though I’m keeping an eye on the Quillette people.) But if attempts to analyze how we’re collectively failing to live up to our ideals are construed as an attack, that just makes us even worse than we already are at living up to our own ideals!
(Full disclosure: uh, I guess I would also count as part of the “Vassar crowd” these days??)
But if attempts to analyze how we’re collectively failing to live up to our ideals are construed as an attack, that just makes us even worse than we already are at living up to our own ideals!
Regarding Ben’s criticisms of EA, it’s my opinion that while I agree with many of his conclusions, I don’t agree with some of the strongest conclusions he reaches, or with how he makes the arguments for them, simply because I believe they are not good arguments. This is common for interactions between EA and Ben these days, though Ben often doesn’t respond to counter-arguments, as he often seems to be under the impression that when a counter-argument disagrees with him in a way he doesn’t himself accept, his interlocutors are persistently acting in bad faith. I haven’t interacted directly with Ben much for a while, until he wrote the OP this week. So, I haven’t been following as closely how Ben construes ‘bad faith’, and I haven’t taken the opportunity to discover, if he were willing to relay it, what his model of bad faith is. I currently find confusing some of his impressions that the EAs he discusses things with are acting in bad faith. At least I don’t find them a compelling account of people’s real motivations in discourse.
I haven’t been following as closely how Ben construes ‘bad faith’, and I haven’t taken the opportunity to discover, if he were willing to relay it, what his model of bad faith is.
Recently I’ve often found myself wishing for better (widely-understood) terminology for phenomena that it’s otherwise tempting to call “bad faith”, “intellectual dishonesty”, &c. I think it’s pretty rare for people to be consciously, deliberately lying, but motivated bad reasoning is horrifyingly ubiquitous and exhibits a lot of the same structural problems as deliberate dishonesty, in a way that’s worth distinguishing from “innocent” mistakes because of the way it responds to incentives. (As Upton Sinclair wrote, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”)
If our discourse norms require us to “assume good faith”, but there’s an important sense in which that assumption isn’t true (because motivated misunderstandings resist correction in a way that simple mistakes don’t), but we can’t talk about the ways it isn’t true without violating the discourse norm, then that’s actually a pretty serious problem for our collective sanity!
So, I’ve read the two posts on Benquo’s blog you’ve linked to. The first one, “Bad Intent Is a Disposition, Not a Feeling”, depended on the claim he made that mens rea is not a real thing. As was pointed out in the comments, and as he himself acknowledged, those comments made some good points that would cause him to rethink the theme he was trying to impart with his original post. I looked up both the title of that post and ‘mens rea’ on his blog to see if he had posted any updated thoughts on the subject. There weren’t results on either of those topics from the date of publication of that post onward, so it doesn’t appear he has publicly updated his thoughts on these topics. That was over 2 years ago.
The second post on the topic was more abstract and figurative, and was using some analogy and metaphor to get its conclusion across. So, I didn’t totally understand the relevance of all that in the second post to the first post, even though the second was intended as a sequel to the first. It seemed to me the crux of resolving the problem was:
Sadly, being honest about your sense that someone else is arguing in bad faith is Officially Not OK. It is read as a grave and inappropriate attack. And as long as that is the case, he could reasonably expect that bringing it up would lead to getting yelled at by everyone and losing the interaction. So maybe he felt and feels like he has no good options here.
Benquo’s conclusion is that, for public discourse and social epistemology, at least in his experience, being honest about your sense that someone else is arguing in bad faith is Officially Not OK, because it is always construed as a grave and inappropriate personal attack, so resolving the issue appears socially or practically impossible. My experience is that just isn’t the case. Being honest in this way can lend itself to better modes of public discourse. For one thing, it can move communities to states of discourse that are much different from where the EA and rationality communities currently are. One problem is I’m not sure even those rationalists and EAs who are aware of such problems would prefer the options available, which would be just hopping onto different platforms with very different discourse norms. I would think that would be the most practical option, since the other viable alternative would be for these communities to adopt other communities’ discourse norms, and replace their own with them wholesale. That seems extremely unlikely to happen.
Part of the problem is that Benquo seems to construe ‘bad faith’ with an overly reductionistic definition. This was fleshed out in the comments on the original post on his blog, by commenters AGB and Res. So, that makes it hard for me to accept the frame Benquo bases his eventual conclusions on. Another problem for me is that the inferential distance gaps between myself, Benquo, and the EA and rationality communities, respectively, are so large now that it would take a lot of effort to write them up and explain them all. Since it isn’t a super high priority for me, I’m not sure that I will get around to it. However, there is enough material in Benquo’s posts, and the discussion in the comments, that I can work with it to explain some of what I think is wrong with how he construes bad faith in these posts. If I write something like that up, I will post it on LW.
I don’t know if the EA community in large part disagrees with the OP for the same reasons I do. I think based off some of the material I have been provided with in the comments here, I have more to work with to find the cruxes of disagreement I have with how some people are thinking, whether critically or not, about the EA and rationality communities.
I understand the “Vassar Crowd” to be a group of Michael Vassar’s friends who:
were highly critical of EA.
were somewhat less critical of the rationality community.
were partly at odds with the bulk of the rationality community for not being as hostile to EA as they thought it should have been.
Maybe you meet those qualifications, but as I understand it, the “Vassar Crowd” started publishing blog posts on LessWrong and their own personal blogs, as well as on social media, over the course of a few months starting in the latter half of 2016. It was part of a semi-coordinated effort. While I wouldn’t posit a conspiracy, it seems like a lot of these criticisms of EA were developed in conversations within this group, and, given the name of the group, I assume the different people involved were primarily nudged by Vassar. This also precipitated the creation of Alyssa Vance’s Long-Term World Improvement mailing list.
It doesn’t seem to have continued as a crowd to the present, as the lives of the people involved have obviously changed a lot, and it doesn’t appear from the outside that it is as cohesive anymore, I assume in large part because of Vassar’s decreased participation in the community. Ben seems to be one of the only people sustaining the effort to criticize EA as the others were doing before.
So while I appreciate the disclosure, I don’t know if my previous comment was precise enough: as far as I understand it, the Vassar Crowd was a more limited clique that manifested much more in the past than in the present.
The point I was trying to make is that much of the rationality community has nothing to do with the community’s stated values.
Yes, this is true, and also implies that the rationality community should be replaced with something very different, according to its stated goals. (Did you think I didn’t think that?)
So, in stating your personal impression of EA, based on Sarah’s blog post, as though it were a fact, and as if it means something unique about EA that isn’t true of other human communities, you’ve argued for too much.
I don’t think it’s unique! I think it’s extremely, extremely common for things to become aesthetic identity movements! This makes the phenomenon matter more, not less!
I have about as many beefs with the rationality movement as I do with the EA movement. I am commenting on this post because Ben already wrote it and I had things to add.
It’s possible that I should feel more moral pressure than I currently do to actively (not just, as a comment on other people’s posts) say what’s wrong about the current state of the rationality community publicly. I’ve already been saying things privately. (This is an invitation to try morally pressuring me, using arguments, if you think it would actually be good for me to do this)
Thanks for acknowledging my point about the rationality community. However, I was trying to get across more generally that I think the ‘aesthetic identity movement’ model might be lacking. If a theory makes the same predictions everywhere, it’s useless. I feel like the ‘aesthetic identity movement’ model might be one of those theories that is too general and not specific enough for me to understand what I’m supposed to take away from its use. For example:
So, the United States of America largely isn’t actually about being a land of freedom to which the world’s people may flock (which requires having everyone’s civil liberties consistently upheld, e.g., robust support for the rule of law, and not adding noise to conversations about these), it’s an aesthetic identity movement around the Founding Fathers as a central node, similar to, e.g., most popular environmentalism (which, for example, opposes nuclear power despite it being good for the environment, because nuclear power is discordant with the environmentalism identity/aesthetics, and Greenpeace is against it). This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated value of America, America ought to be replaced with something very, very different.
Maybe if I knew what I am supposed to do with the information that all kinds of things are aesthetic identity movements instead of being what they actually say they are, I wouldn’t be as confused.
An aesthetic identity movement is one where everything is dominated by how things look on the surface, not what they actually do/mean in material reality. Performances of people having identities, not actions of people in reality. To some extent this is a spectrum, but I think there are attractor states of high/low performativity.
It’s possible for a state not to be an aesthetic identity movement, e.g. by having rule of law, actual infrastructure, etc.
It’s possible for a movement not to be an aesthetic identity movement, by actually doing the thing, choosing actions based on expected value rather than aesthetics alone, having infrastructure that isn’t just doing signalling, etc.
Academic fields have aesthetic elements, but also (some of the time) do actual investigation of reality (or, of reasoning/logic, etc) that turns up unexpected information.
Mass movements are more likely to be aesthetic identity movements than obscure ones. Movements around gaining resources through signalling are more likely to be aesthetic identity movements than ones around accomplishing objectives in material reality. (Homesteading in the US is an example of a historical movement around material reality)
(Note, EA isn’t only an aesthetic identity movement, but it is largely one, in terms of percentage of people, attention, etc; this is an important distinction)
It seems like the concept of “aesthetic identity movement” I’m using hasn’t been communicated to you well; if you want to see where I’m coming from more in more detail, read the following.
Yes, this is true, and also implies that the rationality community should be replaced with something very different, according to its stated goals. (Did you think I didn’t think that?)
I don’t think you didn’t think that. My question was to challenge you to answer why you, and the others if you would feel comfortable speaking to their perspectives, focus so much of your attention on EA instead of the rationality community (or other communities perhaps presenting the same kind and degree of problems), if you indeed understand they share similar problems and pose similarly high stakes (e.g., failure modes of x-risk reduction).
I asked because it’s frustrating to me how inconsistent it is with your own efforts here to put way more pressure on EA than on rationality. I’m guessing part of the reason for your trepidation within the rationality community is that you feel a sense of how much disruption it could cause, and how much risk there is that nothing would change either way. The same thing has happened when, not so much you, but some of your friends have criticized EA in the past. I was thinking it was because you are socially closer to the rationality community that you wouldn’t be as willing to criticize them.
I am not as invested in the rationality community as I was in the past. So, while I feel some personal responsibility to seek to analyze the intellectual failure modes of rationality, I don’t feel much of a moral urge anymore to correct its social failure modes. So, I lack motivation to think through whether it would be “good” or not for you to do it, though.
I think I actually do much more criticism of the rationality community than the EA community nowadays, although that might be invisible to you since most of it is private. (Anyway, I don’t do that much public criticism of EA either, so this seems like a strange complaint about me regardless)
Well, this was a question more about your past activity than the present activity, and also the greater activity of the same kind of some people you seem to know well, but I thought I would take the opportunity to ask you about it now. At any rate, thanks for taking the time to humour me.
My question was to challenge you to answer why you, and the others if you would feel comfortable speaking to their perspectives, focus so much of your attention on EA instead of the rationality community (or other communities perhaps presenting the same kind and degree of problems), if you indeed understand they share similar problems, and posing similarly high stakes (e.g., failure modes of x-risk reduction).
It doesn’t seem to me like anyone I interact with is still honestly confused about whether and to what extent e.g. CFAR can teach rationality, or rationality provides the promised superpowers. Whereas some people still believe a few core EA claims (like the one the OP criticizes) which I think are pretty implausible if you just look at them in conjunction and ask yourself what else would have to be true.
If you or anyone else want to motivate me to criticize the Rationality movement more, pointing me at people who continue to labor under the impression that the initial promises were achievable is likely to work; rude and condescending “advice” about how the generic reader (but not any particular person) is likely to feel the wrong way about my posts on EA is not likely to work.
So, I agree with the claim that EA has a lot of aesthetic-identity-elements going on that compound (and in many cases cause) the problem. I think that’s really important to acknowledge (although it’s not obvious that the solution needs to include starting over)
But I also think that, in the case of this particular post, the answer is simpler. The OP says:
Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated. My former employer GiveWell in particular stands out as a problem here, since it publishes such cost-per-life-saved numbers
Which… sure uses language that sounds like it’s an attack on Givewell to me.
[edit] The above paragraph seems:
a) dishonest and/or false, in that it claims Givewell publishes such cost-per-life numbers, but at the moment AFAICT Givewell goes to great lengths to hide those numbers (i.e. to find the numbers for AMF you get redirected to a post about how to think about the numbers, which links to a spreadsheet, which seems like the right procedure to me for forcing people to actually think a bit about the numbers)
b) uses phrases like “hoarding” and “wildly exaggerated” that I generally associate with coalition politics rather than denotative-language-that-isn’t-trying-to-be-enacting, while criticizing others for coalition politics, which seems a) like bad form, b) not like a process that I expect to result in something better-than-EA at avoiding pathologies that stem from coalition politics.
[double edit] to be clear, I do think it’s fair to criticize CEA and/or the EA community collectively for nonetheless taking the numbers as straightforward. And I think their approach to OpenAI deserves, at the very least, some serious scrutiny. (Although I think Ben’s claims about how off they are are overstated. This critique by Kelsey seems pretty straightforwardly true to me. AFAICT in this post Ben has made a technical error of approximately the same order of magnitude as the one he’s claiming others are making)
My comment was a response to Evan’s, in which he said people are reacting emotionally based on identity. Evan was not explaining people’s response by referring to actual flaws in Ben’s argumentation, so your explanation is distinct from Evan’s.
a) GiveWell does publish cost-effectiveness estimates. I found them in a few clicks. So Ben’s claim is neither dishonest nor false.
b) So, the fact that you associate these phrases with coalitional politics, means Ben is attacking GiveWell? What? These phrases have denotative meanings! They’re pretty clear to determine if you aren’t willfully misinterpreting them! The fact that things that have clear denotative meanings get interpreted as attacking people is at the core of the problem!
To say that Ben creating clarity about what GiveWell is doing is an attack on GiveWell, is to attribute bad motives to GiveWell. It says that GiveWell wants to maintain a positive impression of itself, regardless of the facts, i.e. to defraud nearly everyone. (If GiveWell wants correct information about charities and charity evaluations to be available, then Ben is acting in accordance with their interests [edit: assuming what he’s saying is true], i.e. the opposite of attacking them).
Perhaps you endorse attributing bad motives to GiveWell, but in that case it would be hypocritical to criticize Ben for doing things that could be construed as doing that.
a) GiveWell does publish cost-effectiveness estimates. I found them in a few clicks. So Ben’s claim is neither dishonest nor false.
While I agree that this is a sufficient rebuttal of Ray’s “dishonest and/or false” charge (Ben said that GiveWell publishes such numbers, and GiveWell does, in fact, publish such numbers), it seems worth acknowledging Ray’s point about context and reduced visibility: it’s not misleading to publish potentially-untrustworthy (but arguably better than nothing) numbers surrounded by appropriate caveats and qualifiers, even when it would be misleading to loudly trumpet the numbers as if they were fully trustworthy.
That said, however, Ray’s “GiveWell goes to great lengths to hide those numbers” claim seems false to me in light of an email I received from GiveWell today (the occasion of my posting this belated comment), which reads, in part:
GiveWell has made a lot of progress since your last recorded gift in 2015. Our current top charities continue to avert deaths and improve lives each day, and are the best giving opportunities we’re aware of today. To illustrate, right now we estimate that for every $2,400 donated to Malaria Consortium for its seasonal malaria chemoprevention program, the death of a child will be averted.
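(To put that figure in perspective, here is a minimal sketch of the arithmetic it implies if the number is taken at face value and assumed to hold at scale; the $2,400 is from the email above, while the $10 billion endowment is a purely illustrative round figure, not any particular foundation’s actual budget.)

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A rough sketch of the implied arithmetic, not a claim about any real budget:
% c = quoted cost per death averted (from the email above)
% E = hypothetical endowment size, an illustrative round figure only
\begin{align*}
  c &= \$2{,}400 \\
  E &= \$10\,\text{billion} = \$10^{10} \\
  E / c &= 10^{10} / (2.4 \times 10^{3}) \approx 4.2 \times 10^{6}\ \text{deaths averted}
\end{align*}
\end{document}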
Further update on this. Givewell has since posted this blogpost. I haven’t yet reviewed it enough to have a strong opinion on it, but I think it at least explains some of the difference in epistemic state I had at the time of this discussion.
A few years ago, we decided not to feature our cost-effectiveness estimates prominently on our website. We had seen people using our estimates to make claims about the precise cost to save a life that lost the nuances of our analysis; it seemed they were understandably misinterpreting concrete numbers as conveying more certainty than we have. After seeing this happen repeatedly, we chose to deemphasize these figures. We continued to publish them but did not feature them prominently.
Over the past few years, we have incorporated more factors into our cost-effectiveness model and increased the amount of weight we place on its outputs in our reviews (see the contrast between our 2014 cost-effectiveness model versus our latest one). We thus see our cost-effectiveness estimates as important and informative.
We also think they offer a compelling motivation to donate. We aim to share these estimates in such a way that it’s reasonably easy for anyone who wants to dig into the numbers to understand all of the nuances involved.
These phrases have denotative meanings! They’re pretty clear to determine if you aren’t willfully misinterpreting them! The fact that things that have clear denotative meanings get interpreted as attacking people is at the core of the problem!
I wonder if it would help to play around with emotive conjugation? Write up the same denotative criticism twice, once using “aggressive” connotations (“hoarding”, “wildly exaggerated”) and again using “softer” words (“accumulating”, “significantly overestimated”), with a postscript that says, “Look, I don’t care which of these frames you pick; I’m trying to communicate the literal claims common to both frames.”
Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense.
In most contexts when language like this is used, it’s usually pretty clear that you are implying someone is doing something closer to deliberately lying than some softer kind of deception. I am aware Ben might have some model about how Givewell or others in EA are acting in bad faith in some other manner, involving self-deception. If that is what he is implying Givewell or Good Ventures are doing instead of deliberately lying, that isn’t clear from the OP. He could have also stated the organizations in question are not fully aware they’re just marketing obvious nonsense, and have been immune to his attempts to point this out to them. If that is the case, he didn’t state it in the OP either.
So, based on their prior experience, I believe it would appear to many people like he was implying Givewell, Good Ventures, and EA are deliberately lying. Deliberate lying is generally seen as a bad thing. So, to imply someone is deliberately lying seems to clearly be an attribution of bad motives to others. So if Ben didn’t expect or think that is how people would construe part of what he was trying to say, I don’t know what he was going for.
I don’t think the current format is a good venue for me to continue this discussion. For now, roughly, I disagree with the framing in your most recent comment, and stand by my previous comment.
I’ll try to write up a top level post that outlines more of my thinking here. I’d have some interest in a private discussion that gets turned into a google doc that gets turned into a post, or possibly some other format. I think public discussion threads are a uniquely bad format for this sort of thing.
I haven’t read your entire series of posts on Givewell and effective altruism. So I’m basing this comment mostly off of just this post. It seems like it is jumping all over the place.
You say:
This sets up a false dichotomy. Both the Gates Foundation and Good Ventures are focused on areas in addition to funding interventions in the developing world. Obviously, they both believe those other areas, e.g., in Good Ventures’ case, existential risk reduction, present them with the opportunity to prevent just as many, if not more, deaths than interventions in the developing world. Of course, a lot of people disagree with the idea something like AI alignment, which Good Ventures funds, is in any way comparable to cost-effective interventions into the developing world in terms of how many deaths it prevents, its cost-effectiveness, or its moral value. Yet based on how you used to work for Givewell, and you’re now much more focused on AI alignment, it doesn’t seem like you’re one of those people.
If you were one of those people, you would be the kind of person to think that Good Ventures not spending all their money on developing-world interventions, and instead spreading out their grants over time to shape the longer-term future in terms of AI safety and other focus areas, quite objectionable. If you are that kind of person, i.e., you believe it is indeed objectionable Good Ventures is, from the viewpoint of thinking their top priority should be developing-world interventions, ‘hoarding’ their money for other focus areas like AI alignment is objectionable, that is not at all clear or obvious.
Unless you believe that, then, right here, there is a third option other than “Gates Foundation and Good Ventures are hoarding money at the price of millions of deaths”, and the “numbers are wildly exaggerated”. That is, both foundations believe the money they are reserving for focus areas other than developing-world interventions aren’t being hoarded at the expense of millions of lives. Presumably, this is because both foundations also believe the counterfactual expected value of these other focus areas is at least comparable to the expected value of developing-world interventions.
If the Gates Foundation and Good Ventures appear not to, across the proportions of their endowments they’ve respectively allotted for developing-world interventions and other focus areas, be not giving away their money as quickly as they could while still being as effective as possible, then you objecting to it would make sense. However, that would be a separate thesis that you haven’t covered in this post. Were you to put forward such a thesis, you’ve already laid out the case for what’s wrong with a foundation like Good Ventures not fully funding each year the developing-world interventions of Givewell’s recommended charities.
Yet you would still need to make additional arguments for what Good Ventures is doing wrong in only granting to another focus area like AI alignment as much as they annually are now, instead of grantmaking at a much higher annual rate or volume. Were you to do that, it would be appropriate to point out what is wrong with the reasons an organization like the Open Philanthropy Project (Open Phil) doesn’t grant much more to their other focus areas each year.
For example, one reason it wouldn’t make sense for Open Phil to be granting 100x as much in total to AI risk each year as they are now, starting this year, is that it’s not clear AI risk as a field currently has that much room for more funding. It is at least not clear AI risk organizations could sustain such a high growth rate if their grants from Open Phil were 100x bigger than they are now. That’s an entirely different point than any you made in this post. Also, as far as I’m aware, it isn’t an argument you’ve made anywhere else.
Given that you are presumably familiar with these considerations, it seems to me you should have been able to anticipate the possibility of the third option. In other words, unless you’re going to make the case that either:
it is objectionable for a foundation like Good Ventures to reserve some of their endowment for the long-term development of a focus area like AI risk, instead of using it all to fund cost-effective developing-world interventions, and/or;
it is objectionable Good Ventures isn’t funding AI alignment more than they currently are, and why;
you should have been able to tell in advance the dichotomy you presented is indeed a false one. It seems that, of the two options in the dichotomy you presented, you believe cost-effectiveness estimates like those from Givewell are wildly exaggerated. I don’t know why you presented it as though you thought it might just as easily be either of the two scenarios, but the fact that you’re exactly the kind of person who should have been able to anticipate a plausible third scenario, and didn’t, undermines the point you’re trying to make.
One thing that falls out of my above commentary is that since it is not clearly the case that only one of the two scenarios you presented is true, it is not necessarily the case either that the mentioned cost-effectiveness estimates “have to be interpreted as marketing copy designed to control your behaviour”. What’s more, you’ve presented another false dichotomy here. It is not the case Givewell’s cost-effectiveness estimates must be only and exclusively one of either:
severely distorted marketing copy designed for behavioural control.
unbiased estimates designed to improve the quality of your decision-making process.
Obviously, Givewell’s estimates aren’t unbiased. I don’t recall Givewell ever claiming they are unbiased, although it is a problem for other actors in EA to treat Givewell’s cost-effectiveness estimates as unbiased. I recall from reading a couple of posts from your series on Givewell that it seemed as though you were trying to hold Givewell responsible for the exaggerated rhetoric of others in EA using Givewell’s cost-effectiveness estimates. It seems like you’re doing that again now. I never understood then, and I don’t understand now, why you’ve tried explaining all this as if Givewell is responsible for how other people are misusing their numbers. Perhaps Givewell should do more to discourage a culture of exaggeration and bluster in EA built on people using their cost-effectiveness estimates and prestige as a charity evaluator to make claims about developing-world interventions that aren’t actually backed up by Givewell’s research and analysis.
Yet that is another, different argument you would have to make, and one that you didn’t. Holding Givewell exclusively culpable for how their cost-effectiveness estimates and analyses have been misused, as you have in the past and present, would only be justified by some kind of evidence that Givewell is actively trying to cultivate a culture of exaggeration and bluster and shiny-distraction-via-prestige around themselves. I’m not saying no such evidence exists, but if it does, you haven’t presented any of it.
You make this claim as though it might be the exact same people in the organizations of Givewell, Open Phil, and Good Ventures who are responsible for all the following decisions:
presenting Givewell’s cost-effectiveness estimates in the way they do.
making recommendations to Good Ventures via Givewell about how much Good Ventures should grant to each of Givewell’s recommended charities.
Good Ventures’ stake in OpenAI.
However, it isn’t the same people making all of these decisions across these 3 organizations.
Dustin Moskovitz and Cari Tuna are ultimately responsible for what kinds of grants Good Ventures makes, regardless of focus area, but they obviously delegate much decision-making to Open Phil.
Good Ventures obviously has tremendous influence over how Givewell conducts the research and analysis behind its cost-effectiveness estimates, but by all appearances it has let Givewell operate with a great deal of autonomy, and hasn’t been trying to push Givewell to dramatically alter how it conducts that research and analysis. Thus, it would make sense to look to Givewell, and not Good Ventures, for what to make of that research and analysis.
Elie Hassenfeld is the current executive director of Givewell, and thus is the one to be held ultimately accountable for Givewell’s cost-effectiveness estimates, and recommendations to Good Ventures. Holden Karnofsky is a co-founder of Givewell, but for a long time has been focusing full-time on his role as executive director of Open Phil. Holden no longer co-directs Givewell with Elie.
As ED of Open Phil, Holden has spearheaded Open Phil’s work in, and Good Ventures’ funding of, AI risk research.
That there is a division of labour whereby Holden has led Open Phil’s work, and Elie Givewell’s, has been common knowledge in the effective altruism movement for a long time.
What many people disagreed with about Open Phil recommending Good Ventures take a stake in OpenAI, and Holden Karnofsky consequently being made a board member of OpenAI, is based on the particular roles played by the people involved in the grant investigation, which I won’t go through here. It was also based, as in your case, on the expectation OpenAI may make the state of things in AI risk worse rather than better, owing to either OpenAI’s ignorance or misunderstanding of how AI alignment research should be conducted, at least in the eyes of many people in the rationality and x-risk reduction communities.
The assertion that Givewell is wildly exaggerating their cost-effectiveness estimates is an assertion that the numbers are being fudged at a different organization than Open Phil. The common denominator is of course that Good Ventures made grants based on recommendations from both Open Phil and Givewell. Holden and Elie are co-founders of both Open Phil and Givewell. However, with the two separate cases of Givewell’s cost-effectiveness estimates, and Open Phil’s process for recommending Good Ventures take a stake in OpenAI, it is two separate organizations, run by two separate teams, led separately by Elie and Holden respectively. If in each of the cases you present, of Givewell and of Open Phil’s support for OpenAI, something wrong has been done, they are two very different kinds of mistakes made for very different reasons.
Again, Good Ventures is ultimately accountable for grants made in both cases. You could hold each organization accountable separately, but when you refer to them as the “same parties”, you’re making it out as though Good Ventures, and their satellite organizations, are either, generically, incompetent or dishonest. I say “generically”, because while you set it up that way, you know as well as anyone the specific ways in which the two cases of Givewell’s estimates, and Open Phil’s/Good Ventures’ relationship with OpenAI, differ. You know this because you have been one of the most prominent individual critics, if not the most prominent, in both cases for the last few years.
Yet when you call them all the “same parties”, you’re treating both cases as if the ‘family’ of Good Ventures and surrounding organizations generally can’t be trusted, because it’s opaque to us how they come to make these decisions that lead to dishonest or mistaken outcomes as you’ve alleged. Yet you’re one of the people who made clear to everyone else how the decisions were made; who were the different people/organizations who made the decisions; and what one might find objectionable about them.
To substantiate the claim that the two different cases of Givewell’s estimates, and Open Phil’s relationship to OpenAI, are sufficient grounds to conclude that none of these organizations, nor their parent foundation Good Ventures, can generally be trusted, you could have held Good Ventures accountable for not being diligent enough in monitoring the fidelity of the recommendations they receive from either Givewell or Open Phil. Yet you didn’t do that. You could have also, now or in the past, tried to make the argument that Givewell and Open Phil should each separately be held accountable for what you see as their mistakes in the two separate cases. Yet you didn’t do that either.
Making any of those arguments would have made sense. Yet what you did is you treated it as though Givewell, Open Phil, and Good Ventures all play the same kind of role in both cases. Not even all 3 organizations are involved in both cases. To summarize all this, the two cases of Givewell’s estimates, and Open Phil’s relationship to OpenAI, if they are problematic, are not the same kinds of problems caused by Good Ventures for the same reasons. Yet you’re making it out as though they are.
It might make more sense if you were someone else who just saw the common connection of Good Ventures, and didn’t know how to go about criticizing them other than to point out they were sloppy in both cases. Yet you know everything I’ve mentioned about who the different people are in each of the two cases, and the different kinds of decisions each organization is responsible for, and how they differ in how they make those decisions. So, you know how to hold each organization separately accountable for what you see as their separate mistakes. You know these things because you:
identified as an effective altruist for several years.
have been a member of the rationality community for several years.
are a former employee of Givewell.
have transitioned since you’ve left Givewell to focusing more of your time on AI alignment.
Yet you make it out as though Good Ventures, Givewell, and Open Phil are some unitary blob that makes poor decisions. If you wanted to make any one of, or even all, the other specific, alternative arguments I suggested about how to hold each of the 3 organizations individually accountable, it would have been a lot easier for you to make a solid and convincing argument than the one you’ve actually made regarding these organizations. Yet because you didn’t, this is another instance of you undermining what you yourself are trying to accomplish with a post like this.
You started this post off with what’s wrong with Peter Singer’s cost-effectiveness estimates from his 1997 essay. Then you pointed out what you see as similar errors made by specific EA-aligned organizations today. Then you bridged to how, because funding gaps are illusory given the erroneous cost-effectiveness estimates, the Gates Foundation and Good Ventures are doing much less than they should with regards to developing-world interventions.
Then, you zoom in on what you see as the common pattern of bad recommendations being given to Good Ventures by Open Phil and Givewell. Yet the two cases of recommendations you’ve provided are from these 2 separate organizations, which make their decisions and recommendations in very different ways, and are run by 2 different teams of staff, as I pointed out above. And since, as I’ve established, you’ve known all this in intimate detail for years, you’re making arguments that make much less sense than the ones you could have made based on the information available to you.
None of that has anything to do with the Gates Foundation. You told me in response to another comment I made on this post that it was another recent discussion on LW where the Gates Foundation came up that inspired you to make this post. You made your point about the Gates Foundation. Then, that didn’t go anywhere, because you made unrelated points about unrelated organizations.
For the record, when you said:
and
none of that applies to the Gates Foundation, because the Gates Foundation isn’t an EA-aligned organization “mass-marketing high cost effectiveness representations” in a bid to get small, individual donors to build up a mass movement of effective charitable giving to fill illusory funding gaps they could easily fill themselves. Other things being equal, the Gates Foundation could obviously fill the funding gap. None of the rest of those things apply to the Gates Foundation, though, and they would have to for it to make sense that this post, and its thesis, were inspired by mistakes being made by the Gates Foundation, not just EA-aligned organizations.
However, going back to “the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent”, it seems like you’re claiming the thesis of Singer’s 1997 essay, and the basis for effective altruism as a movement(?), are predicated exclusively on reliably nonsensical cost-effectiveness estimates from Givewell/Open Phil, not just for developing-world interventions, but in general. None of that is true, because Singer’s thesis is not based exclusively on a specific set of cost-effectiveness estimates about specific causes from specific organizations, and Singer’s thesis isn’t the exclusive basis for the effective altruism movement. Even if that were a logically valid argument, your conclusion would not be sound either way, because, as I’ve pointed out above, the premise that it makes sense to treat Givewell, Open Phil, and Good Ventures like a unitary actor is false.
In other words, because “mass-marketed high-cost-effectiveness representations” are not the foundation of “the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent” in general, and certainly aren’t some kind of primary basis for effective altruism, if that was something you were suggesting, your conclusion destroys nothing.
To summarize:
you knowingly presented a false dichotomy about why the Gates Foundation and Good Ventures don’t donate their entire endowments to developing-world interventions.
you knowingly set up a false dichotomy whereby either everyone has been acting the whole time as if Givewell’s and Open Phil’s cost-effectiveness estimates are unbiased, or those estimates are wildly exaggerated because those organizations are deliberately trying to manipulate people’s behaviour.
you cannot plausibly deny having been cognizant of the fact that these dichotomies are false, because the evidence with which you present them consists of your own prior conclusions, drawn in part from your personal and professional experiences.
you said this post was inspired by the point you made about the Gates Foundation, but that has nothing to do with the broader arguments you’ve made about Good Ventures, Open Phil, or Givewell, and those arguments don’t back the conclusion you’ve consequently drawn about utilitarianism and effective altruism.
In this post, you’ve raised some broad concerns of things happening in the effective altruism movement I think are worth serious consideration.
I don’t believe the rationale for why Givewell doesn’t recommend to Good Ventures to fully fund Givewell’s top charities totally holds up, and I’d like to understand better why they don’t. I think Givewell maybe should recommend Good Ventures fully fund their own top charities each year.
The concern that EA has a tendency to move people too far away from the more ordinary and concrete aspects of their lives is a valid one.
I am also unhappy with much of what has happened relating to OpenAI.
All these are valid concerns that would be much easier to take seriously if you presented arguments for them on their own, as opposed to presenting them as a few of many different assertions that relate to each other, at best, in a very tenuous manner, in a big soup of an argument against effective altruism that doesn’t logically hold up, given the litany of unresolved issues with it I’ve pointed out above. It’s also not clear why you wouldn’t have realized any of this before you made this post, since all the knowledge that served as evidence for your premises was available to you beforehand, as it was information you yourself published on the internet.
Even if all the apparent leaps of logic you’ve made in this post are artifacts of this post being a truncated summary of your entire, extensive series of posts on Givewell, and EA, the entire structure of this one post undermines the point(s) you’re trying to make with it.
I think I can summarize my difficulties with this comment a bit better now.
(1) It’s quite long, and brings up many objections that I dealt with in detail in the longer series I linked to. There will always be more excuses someone can generate that sound facially plausible if you don’t think them through. One has to limit scope somehow, and I’d be happy to get specific constructive suggestions about how to do that more clearly.
(2) You’re exaggerating the extent to which Open Philanthropy Project, Good Ventures, and GiveWell, have been separate organizations. The original explanation of the partial funding decision—which was a decision about how to recommend allocating Good Ventures’s capital—was published under the GiveWell brand, but under Holden’s name. My experience working for the organizations was broadly consistent with this. If they’ve since segmented more, that sounds like an improvement, but doesn’t help enough with the underlying revealed preferences problem.
I don’t know that this suggestion is best – it’s a legitimately hard problem – but a policy I think would be pretty reasonable is:
When responding to lengthy comments/posts that include at least 1-2 things you know you dealt with in a longer series, one option is to simply leave it at: “hmm, I think it’d make more sense for you to read through this longer series and think carefully about it before continuing the discussion” rather than trying to engage with any specific points.
And then shifting the whole conversation into a slower mode, where people are expected to take a day or two in between replies to make sure they understand all the context.
(I think I would have had similar difficulty responding to Evan’s comment as what you describe here)
To clarify a bit—I’m more confused about how to make the original post more clearly scope-limited, than about how to improve my commenting policy.
Evan’s criticism in large part deals with the facts that there are specific possible scenarios I didn’t discuss, which might make more sense of e.g. GiveWell’s behavior. I think these are mostly not coherent alternatives, just differently incoherent ones that amount to changing the subject.
It’s obviously not possible to discuss every expressible scenario. A fully general excuse like “maybe the Illuminati ordered them to do it as part of a secret plot,” for instance, doesn’t help very much, since that posits an exogenous source of complications that isn’t very strongly constrained by our observations, and doesn’t constrain our future anticipations very well. We always have to allow for the possibility that something very weird is going on, but I think “X or Y” is a reasonable short hand for “very likely, X or Y” in this context.
On the other hand, we can’t exclude scenarios arbitrarily. It would have been unreasonable for me, on the basis of the stated cost-per-life-saved numbers, to suggest that the Gates Foundation is, for no good reason, withholding money that could save millions of lives this year, when there’s a perfectly plausible alternative—that they simply don’t think this amazing opportunity is real. This is especially plausible when GiveWell itself has said that its cost per life saved numbers don’t refer to some specific factual claim.
“Maybe partial funding because AI” occurred to enough people that I felt the need to discuss it in the long series (which addressed all the arguments I’d heard up to that point), but ultimately it amounts to a claim that all the discourse about saving “dozens of lives” per donor is beside the point since there’s a much higher-leverage thing to allocate funds to—in which case, why even engage with the claim in the first place?
Any time someone addresses a specific part of a broader issue, there will be countless such scope limitations, and they can’t all be made explicit in a post of reasonable length.
They share a physical office! Good Ventures pays for it! I’m not going to bother addressing comments this long in depth when they’re full of basic errors like this.
For the record, this is no longer going to be true starting in I think about a month, since GiveWell is moving to Oakland and Open Phil is staying in SF.
Otherwise, here is what I was trying to say:
1. Givewell focuses on developing-world interventions, and not AI alignment or any other focus area of Open Phil, which means they aren’t responsible for anything to do with OpenAI.
2. It’s unclear from what you write what role, if any, Open Phil plays in the relationship between Givewell and Good Ventures in Givewell’s annual recommendations to Good Ventures. If it were clear Open Phil was an intermediary in that regard somehow, then treating all 3 projects under 1 umbrella as 1 project with no independence between any of them might make sense. You didn’t establish that, so it doesn’t make sense.
3. Good Ventures signs off on all the decisions Givewell and Open Phil make, and they should be held responsible for the decisions of both Givewell and Open Phil. Yet you know that there are people who work for Givewell and Open Phil who make decisions that are completed before Good Ventures signs off on them. Or I assume you do, since you worked for Givewell. If you somehow know it’s all top-down both ways, that Good Ventures tells Open Phil and Givewell each what they want from them, and Open Phil and Givewell just deliver the package, then say so.
Yes, they do share the same physical office. Yes, Good Ventures pays for it. Shall I point to mistakes made by one of MIRI, CFAR, or LW, but not more than one, and then link the mistake made, whenever, and however tenuously, to all of those organizations?
Should I do the same to any two or more other AI alignment/x-risk organizations you favour, who share offices or budgets in some way?
Shall I point out to all the communities of x-risk reduction, long-term world improvement, EA, and rationality that Michael Arc/Vassar and some of his friends formed a “Vassar crowd”, a cell aimed at unilaterally driving a wedge between x-risk/rationality and EA, which included you, Sarah Constantin, Michael Arc, Alyssa Vance, among others? Should I not hold you or Michael Arc individually responsible for the things you’ve done since then that have caused you to have a mixed reputation, or should I castigate all of you and Michael’s friends in the bunch too, along with as much of the rationality community as I feel like? After all, you’re all friends, and you decided to make the effort together, even though you each made your own individual contributions.
I won’t do those things. Yet that is what it would be for me to behave as you are behaving. I’ll ask you one more question about what you might do: when can I expect you to publicly condemn FHI on the grounds it’s justified to do so because FHI is right next door to CEA, yet Nick Bostrom lacks the decency to go over there and demand the CEA stop posting misleading stats, lest FHI break with the EA community forevermore?
While there is what you see as at least one error in my post, there are many items I see as errors in your post that I will bring to everyone’s attention. It will be revised, edited, and polished so as not to have the errors you see in it, or at least so that what I am and am not saying won’t be ambiguous. It will be a top-level article on both the EA Forum and LW. A large part of it is going to be that you at best are using extremely sloppy arguments, and at worst are making blatant attempts to use misleading info to convince others to do what you want, just as you accuse Good Ventures, Open Phil, and Givewell of doing. One theme will be that you’re still in the x-risk space, employed in AI alignment, and willing to do this toward your former employers, who are also involved in the x-risk/AI alignment space. So, while you may not want to bother with addressing these points, I imagine you will have to eventually for the sake of your reputation.
Then why do Singer and CEA keep making those exaggerated claims? I don’t see why they’d do that if they didn’t think it was responsible for persuading at least some people.
I don’t know. Why don’t you ask Singer and/or the CEA?
They probably believe it is responsible for persuading at least some people. I imagine the CEA does it through some combo of revering Singer, thinking it’s good for optics, and not thinking the level of imprecision involved is so grievous as to be objectionable in the context the claims are presented in.
I don’t expect to get an honest answer to “why do you keep making dishonest claims?”, for reasons I should hope are obvious. I had hoped I might have gotten any answer at all from you about why *you* (not Singer or CEA) claim that Singer’s thesis is not based exclusively on a specific set of cost-effectiveness estimates about specific causes from specific organizations, or why you think it’s relevant that Singer’s thesis isn’t the exclusive basis for the effective altruism movement.
Pretty weird that restating a bunch of things GiveWell says gets construed as an attack on GiveWell (rather than the people distorting what it says), and that people keep forgetting or not noticing those things, in directions that make giving based on GiveWell’s recommendations seem like a better deal than it is. Why do you suppose that is?
I believe it’s because people get their identities very caught up in EA, and, for EAs focused on global poverty alleviation, in Givewell and their recommended charities. So, when someone like you criticizes Givewell, a lot of them react in primarily emotional ways, creating a noisy space where the sound of messages like yours gets lost. So, the points you’re trying to make about Givewell, or the similar points many others have tried making about Givewell, don’t stick for enough of the EA community, or whoever else the relevant groups of people are. Thus, in the collective memory of the community, these things are forgotten or not noticed. Then, the cycle repeats itself each time you write another post like this.
So, EA largely isn’t about actually doing altruism effectively (which requires having correct information about what things actually work, e.g. estimates of cost per life saved, and not adding noise to conversations about these), it’s an aesthetic identity movement around GiveWell as a central node, similar to e.g. most popular environmentalism (which, for example, opposes nuclear power despite it being good for the environment, because nuclear power is discordant with the environmentalism identity/aesthetics, and Greenpeace is against it), which is also claiming credit for, literally, evaluating and acting towards the moral good (as environmentalism claims credit for evaluating and acting towards the health of the planet). This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated values of EA, EA-as-it-is ought to be replaced with something very, very different.
[EDIT: noting that what you said in another comment also agrees with the aesthetic identity movement view: “I imagine the CEA does it through some combo of revering Singer, thinking it’s good for optics, and not thinking the level of precision at which there error is taking place is so grievous as to be objectionable in the context they’re presented in.”]
I agree with your analysis of the situation, but I wonder whether it’s possible to replace EA with anything that won’t turn into exactly the same thing. After all, the EA movement is the result of some people noticing that much of existing charity is like this, and saying “we should replace that with something very, very different”…
And EA did better than the previous things, along some important dimensions! And people attempting to do the next thing will have EA as an example to learn from, which will (hopefully) prompt them to read and understand sociology, game theory, etc. The question of “why do so many things turn into aesthetic identity movements” is an interesting and important one, and, through study of this (and related) questions, it seems quite tractable to have a much better shot at creating something that produces long-term value, than by not studying those questions.
Success is nowhere near guaranteed, and total success is quite unlikely, but, trying again (after a lot of study and reflection) seems like a better plan than just continuing to keep the current thing running.
I agree that studying this is quite important. (If, of course, such an endeavor is entered into with the understanding that everyone around the investigators, and indeed the investigators themselves, have an interest in subverting the investigation. The level of epistemic vigilance required for the task is very unusually high.)
It is not obvious to me that further attempts at successfully building the object-level structure (or even defining the object-level structure) are warranted, prior to having substantially advanced our knowledge on the topic of the above question. (It seems like you may already agree with me, on this; I am not sure if I’m interpreting your comment correctly.)
I’m going to flip this comment on you, so you can understand how I’m seeing it, and thus I fail to see why the point you’re trying to make matters.
One could nitpick about how HPMoR has done much more to save lives through AI alignment than Givewell has ever done through developing-world interventions, and I’ll go share that info as coming from Jessica Taylor in defence of (at least some of) what Ben Hoffman is trying to achieve, perhaps among other places on the public internet, and we’ll see how that goes. The point I was trying to make is that much of the rationality community has nothing to do with the community’s stated values. So, in stating your personal impression of EA, based on Sarah’s blog post, as though it were a fact, and as if it means something unique about EA that isn’t true of other human communities, you’ve argued for too much.
Also, in this comment I indicated my awareness of what was once known as the “Vassar crowd”, which I recall you were a part of:
While we’re here, would you mind explaining with me what all of your beef was with the EA community as misleading in myriad ways to the point of menacing x-risk reduction efforts, and other pursuits of what is true and good, without applying the same pressure to parts of the rationality community that pose the same threat, or for that matter, any other group of people who does the same? What makes EA special?
This just seems obviously correct to me, and I think my failure to properly integrate this perspective until very recently has been extremely bad for my sanity and emotional well-being.
Specifically: if you fail to make a hard mental distinction between “rationality”-the-æsthetic-identity-movement and rationality-the-true-art-of-systematically-correct-reasoning, then finding yourself in a persistent disagreement with so-called “rationalists” about something sufficiently basic-seeming creates an enormous amount of cognitive dissonance (“Am I crazy? Are they crazy? What’s going on?? Auuuuuugh”) in a way that disagreeing with, say, secular humanists or arbitrary University of Chicago graduates, doesn’t.
But … it shouldn’t. Sure, self-identification with the “rationalist” brand name is a signal that someone knows some things about how to reason. And, so is graduating from the University of Chicago. How strong is each signal? Well, that’s an empirical question that you can’t answer by taking the brand name literally.
I thought the “rationalist” æsthetic-identity-movement’s marketing literature expressed this very poetically—
Of course, not everyone is stupid enough to make the mistake I made—I may have been unusually delusional in the extent to which I expected “the community” to live up to the ideals expressed in our marketing literature. For an example of someone being less stupid than recent-past-me, see the immortal Scott Alexander’s comments in “The Ideology Is Not the Movement” (“[...] a tribe much like the Sunni or Shia that started off with some pre-existing differences, found a rallying flag, and then developed a culture”).
This isn’t to say that the so-called “rationalist” community is bad, by the standards of our world. This is my æsthetic identity movement, too, and I don’t see any better community to run away to—at the moment. (Though I’m keeping an eye on the Quillette people.) But if attempts to analyze how we’re collectively failing to live up to our ideals are construed as an attack, that just makes us even worse than we already are at living up to our own ideals!
(Full disclosure: uh, I guess I would also count as part of the “Vassar crowd” these days??)
As for Ben’s criticisms of EA, it’s my opinion that while I agree with many of his conclusions, I don’t agree with some of the strongest conclusions he reaches, or how he makes the arguments for them, simply because I believe they are not good arguments. This is common for interactions between EA and Ben these days, though Ben often doesn’t respond to counter-arguments, as he seems to be under the impression that, when a counter-argument disagrees with him in a way he doesn’t himself agree with, his interlocutors are persistently acting in bad faith. I hadn’t interacted directly with Ben myself much for a while until he wrote the OP this week. So, I haven’t been following as closely how Ben construes ‘bad faith’, and I haven’t taken the opportunity to discover, if he were willing to relay it, what his model of bad faith is. I currently find confusing some of his impressions that the EAs he discusses things with are acting in bad faith. At least I don’t find them a compelling account of people’s real motivations in discourse.
I think the most relevant post by Ben here is “Bad Intent Is a Disposition, Not a Feeling”. (Highly recommended!)
Recently I’ve often found myself wishing for better (widely-understood) terminology for phenomena that it’s otherwise tempting to call “bad faith”, “intellectual dishonesty”, &c. I think it’s pretty rare for people to be consciously, deliberately lying, but motivated bad reasoning is horrifyingly ubiquitous and exhibits a lot of the same structural problems as deliberate dishonesty, in a way that’s worth distinguishing from “innocent” mistakes because of the way it responds to incentives. (As Upton Sinclair wrote, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”)
If our discourse norms require us to “assume good faith”, but there’s an important sense in which that assumption isn’t true (because motivated misunderstandings resist correction in a way that simple mistakes don’t), but we can’t talk about the ways it isn’t true without violating the discourse norm, then that’s actually a pretty serious problem for our collective sanity!
So, I’ve read the two posts on Benquo’s blog you’ve linked to. The first one, “Bad Intent Is a Disposition, Not a Feeling”, depended on the claim he made that mens rea is not a real thing. As was pointed out in the comments, and as he himself acknowledged, those comments made some good points that would cause him to rethink the theme he was trying to impart with his original post. I looked up both the title of that post and ‘mens rea’ on his blog to see if he had posted any updated thoughts on the subject. There weren’t results on either of those topics from the date of publication of that post onward, so it doesn’t appear he has publicly updated his thoughts on these topics. That was over 2 years ago.
The second post on the topic was more abstract and figurative, and was using some analogy and metaphor to get its conclusion across. So, I didn’t totally understand the relevance of all that in the second post to the first post, even though the second was intended as a sequel to the first. It seemed to me the crux of resolving the problem was:
Benquo’s conclusion is that, for public discourse and social epistemology, at least in his experience, being honest about your sense that someone else is arguing in bad faith is Officially Not OK, because it is always construed as a grave and inappropriate personal attack. So, resolving the issue appears socially or practically impossible. My experience is that this just isn’t the case. Being honest in that way can lend itself to better modes of public discourse. For one thing, it can move communities to states of discourse that are much different from where the EA and rationality communities currently are. One problem is I’m not sure even those rationalists and EAs who are aware of such problems would prefer the options available, which would amount to hopping onto different platforms with very different discourse norms. I would think that would be the most practical option, since the other viable alternative would be for these communities to adopt other communities’ discourse norms, replacing their own wholesale. That seems extremely unlikely to happen.
Part of the problem is that it seems Benquo construes ‘bad faith’ with an overly reductionistic definition. This was fleshed out in the comments on the original post on his blog, by commenters AGB and Res. So, that makes it hard for me to accept the frame Benquo bases his eventual conclusions off of. Another problem for me is that the inferential distance gaps between myself, Benquo, and the EA and rationality communities, respectively, are so large now that it would take a lot of effort to write up and explain them all. Since it isn’t a super high priority for me, I’m not sure that I will get around to it. However, there is enough material in Benquo’s posts, and the discussion in the comments, that I can work with it to explain some of what I think is wrong with how he construes bad faith in these posts. If I write something like that up, I will post it on LW.
I don’t know if the EA community in large part disagrees with the OP for the same reasons I do. I think based off some of the material I have been provided with in the comments here, I have more to work with to find the cruxes of disagreement I have with how some people are thinking, whether critically or not, about the EA and rationality communities.
I’ll take a look at these links. Thanks.
I understand the “Vassar Crowd” to be a group of Michael Vassar’s friends who:
were highly critical of EA.
were somewhat less critical of the rationality community.
were partly at odds with the bulk of the rationality community for it not being as hostile to EA as they thought it should have been.
Maybe you meet those qualifications, but as I understand it the “Vassar Crowd” started publishing blog posts on LessWrong and their own personal blogs, as well as on social media, over the course of a few months starting in the latter half of 2016. It was part of a semi-coordinated effort. While I wouldn’t posit a conspiracy, it seems like a lot of these criticisms of EA were developed in conversations within this group, and, given the name of the group, I assume the different people involved were primarily nudged by Vassar. This also precipitated the creation of Alyssa Vance’s Long-Term World Improvement mailing list.
It doesn’t seem to have continued as a crowd to the present, as the lives of the people involved have obviously changed a lot, and it doesn’t appear from the outside it is as cohesive anymore, I assume in large part because of Vassar’s decreased participation in the community. Ben seems to be one of the only people who is sustaining the effort to criticize EA as the others were before.
So while I appreciate the disclosure, I don’t know if my previous comment was precise enough; as far as I understand, the Vassar Crowd was more of a limited clique that manifested much more in the past than in the present.
Yes, this is true, and also implies that the rationality community should be replaced with something very different, according to its stated goals. (Did you think I didn’t think that?)
Geeks, Mops, Sociopaths happened to the rationality community, not just EA.
I don’t think it’s unique! I think it’s extremely, extremely common for things to become aesthetic identity movements! This makes the phenomenon matter more, not less!
I have about as many beefs with the rationality movement as I do with the EA movement. I am commenting on this post because Ben already wrote it and I had things to add.
It’s possible that I should feel more moral pressure than I currently do to actively (not just, as a comment on other people’s posts) say what’s wrong about the current state of the rationality community publicly. I’ve already been saying things privately. (This is an invitation to try morally pressuring me, using arguments, if you think it would actually be good for me to do this)
Thanks for acknowledging my point about the rationality community. However, I was trying to get across more generally that I think the ‘aesthetic identity movement’ model might be lacking. If a theory makes the same predictions everywhere, it’s useless. I feel like the ‘aesthetic identity movement’ model might be one of those theories that is too general and not specific enough for me to understand what I’m supposed to take away from its use. For example:
Maybe if all kinds of things are aesthetic identity movements instead of being what they actually say they are, I wouldn’t be as confused, if I knew what I am supposed to do with this information.
An aesthetic identity movement is one where everything is dominated by how things look on the surface, not what they actually do/mean in material reality. Performances of people having identities, not actions of people in reality. To some extent this is a spectrum, but I think there are attractor states of high/low performativity.
It’s possible for a state not to be an aesthetic identity movement, e.g. by having rule of law, actual infrastructure, etc.
It’s possible for a movement not to be an aesthetic identity movement, by actually doing the thing, choosing actions based on expected value rather than aesthetics alone, having infrastructure that isn’t just doing signalling, etc.
Academic fields have aesthetic elements, but also (some of the time) do actual investigation of reality (or, of reasoning/logic, etc) that turns up unexpected information.
Mass movements are more likely to be aesthetic identity movements than obscure ones. Movements around gaining resources through signalling are more likely to be aesthetic identity movements than ones around accomplishing objectives in material reality. (Homesteading in the US is an example of a historical movement around material reality)
(Note, EA isn’t only an aesthetic identity movement, but it is largely one, in terms of percentage of people, attention, etc; this is an important distinction)
It seems like the concept of “aesthetic identity movement” I’m using hasn’t been communicated to you well; if you want to see where I’m coming from more in more detail, read the following.
Geeks, MOPs, and sociopaths
Identity and its Discontents
Naming the Nameless
On Drama
Optimizing for Stories (vs. Optimizing Reality)
Excerpts from a larger discussion about simulacra
(no need to read all of these if it doesn’t seem interesting, of course)
I will take a look at them. Thanks.
I don’t think you didn’t think that. My question was to challenge you to answer why you, and the others if you would feel comfortable speaking to their perspectives, focus so much of your attention on EA instead of the rationality community (or other communities perhaps presenting the same kind and degree of problems), if you indeed understand they share similar problems and pose similarly high stakes (e.g., failure modes of x-risk reduction).
I asked because it’s frustrating to me how inconsistent it is with your own efforts here to put so much more pressure on EA than on rationality. I’m guessing part of the reason for your trepidation about the rationality community is that you sense how much disruption it could cause, and how much risk there is that nothing would change anyway. The same thing has happened when some of your friends (not so much you) have criticized EA in the past. I was thinking it was because you are socially closer to the rationality community that you wouldn’t be as willing to criticize them.
I am not as invested in rationality as a community as I was in the past. So, while I feel some personal responsibility to seek to analyze the intellectual failure modes of rationality, I don’t feel much of a moral urge anymore to correct its social failure modes. I therefore lack motivation to think through whether it would be “good” or not for you to do it.
I think I actually do much more criticism of the rationality community than the EA community nowadays, although that might be invisible to you since most of it is private. (Anyway, I don’t do that much public criticism of EA either, so this seems like a strange complaint about me regardless)
Well, this was a question more about your past activity than your present activity, and also about the greater activity of the same kind from some people you seem to know well, but I thought I would take the opportunity to ask you about it now. At any rate, thanks for taking the time to humour me.
It doesn’t seem to me like anyone I interact with is still honestly confused about whether and to what extent e.g. CFAR can teach rationality, or rationality provides the promised superpowers. Whereas some people still believe a few core EA claims (like the one the OP criticizes) which I think are pretty implausible if you just look at them in conjunction and ask yourself what else would have to be true.
If you or anyone else want to motivate me to criticize the Rationality movement more, pointing me at people who continue to labor under the impression that the initial promises were achievable is likely to work; rude and condescending “advice” about how the generic reader (but not any particular person) is likely to feel the wrong way about my posts on EA is not likely to work.
So, I agree with the claim that EA has a lot of aesthetic-identity-elements going on that compound (and in many cases cause) the problem. I think that’s really important to acknowledge (although it’s not obvious that the solution needs to include starting over)
But I also think that, in the case of this particular post, the answer is simpler. The OP says:
Which… sure uses language that sounds like it’s an attack on Givewell to me.
[edit] The above paragraph seems:
a) dishonest and/or false, in that it claims Givewell publishes such cost-per-life numbers, but at the moment AFAICT Givewell goes to great lengths to hide those numbers (i.e. to find the numbers for AMF you get redirected to a post about how to think about the numbers, which links to a spreadsheet; this seems like the right procedure to me for forcing people to actually think a bit about the numbers)
b) uses phrases like “hoarding” and “wildly exaggerated” that I generally associate with coalition politics rather than denotative-language-that-isn’t-trying-to-be-enacting, while criticizing others for coalition politics, which seems (i) like bad form, and (ii) not like a process that I expect to result in something better-than-EA at avoiding pathologies that stem from coalition politics.
[double edit] to be clear, I do think it’s fair to criticize CEA and/or the EA community collectively for nonetheless taking the numbers as straightforward. And I think their approach to OpenAI deserves, at the very least, some serious scrutiny. (Although I think Ben’s claims about how off they are are overstated. This critique by Kelsey seems pretty straightforwardly true to me. AFAICT in this post Ben has made a technical error of approximately the same order of magnitude as the one he’s claiming others are making)
My comment was a response to Evan’s, in which he said people are reacting emotionally based on identity. Evan was not explaining people’s response by referring to actual flaws in Ben’s argumentation, so your explanation is distinct from Evan’s.
a) GiveWell does publish cost-effectiveness estimates. I found them in a few clicks. So Ben’s claim is neither dishonest nor false.
b) So, the fact that you associate these phrases with coalitional politics, means Ben is attacking GiveWell? What? These phrases have denotative meanings! They’re pretty clear to determine if you aren’t willfully misinterpreting them! The fact that things that have clear denotative meanings get interpreted as attacking people is at the core of the problem!
To say that Ben creating clarity about what GiveWell is doing is an attack on GiveWell, is to attribute bad motives to GiveWell. It says that GiveWell wants to maintain a positive impression of itself, regardless of the facts, i.e. to defraud nearly everyone. (If GiveWell wants correct information about charities and charity evaluations to be available, then Ben is acting in accordance with their interests [edit: assuming what he’s saying is true], i.e. the opposite of attacking them).
Perhaps you endorse attributing bad motives to GiveWell, but in that case it would be hypocritical to criticize Ben for doing things that could be construed as doing that.
While I agree that this is a sufficient rebuttal of Ray’s “dishonest and/or false” charge (Ben said that GiveWell publishes such numbers, and GiveWell does, in fact, publish such numbers), it seems worth acknowledging Ray’s point about context and reduced visibility: it’s not misleading to publish potentially-untrustworthy (but arguably better than nothing) numbers surrounded by appropriate caveats and qualifiers, even when it would be misleading to loudly trumpet the numbers as if they were fully trustworthy.
That said, however, Ray’s “GiveWell goes to great lengths to hide those numbers” claim seems false to me in light of an email I received from GiveWell today (the occasion of my posting this belated comment), which reads, in part:
(Bolding mine.)
Further update on this. Givewell has since posted this blogpost. I haven’t yet reviewed it enough to have a strong opinion on it, but I think it at least explains some of the difference in the epistemic state I had at the time of this discussion.
Relevant bit:
A friend also recently mentioned getting this email to me, and yes, this does significantly change my outlook here.
I wonder if it would help to play around with emotive conjugation? Write up the same denotative criticism twice, once using “aggressive” connotations (“hoarding”, “wildly exaggerated”) and again using “softer” words (“accumulating”, “significantly overestimated”), with a postscript that says, “Look, I don’t care which of these frames you pick; I’m trying to communicate the literal claims common to both frames.”
When he wrote:
In most contexts when language like this is used, it’s usually pretty clear that you are implying someone is doing something closer to deliberately lying than some softer kind of deception. I am aware Ben might have some model on which Givewell or others in EA are acting in bad faith in some other manner, involving self-deception. If that is what he is implying Givewell or Good Ventures are doing instead of deliberately lying, that isn’t clear from the OP. He could have also stated that the organizations in question are not fully aware they’re just marketing obvious nonsense, and had been immune to his attempts to point this out to them. If that is the case, he didn’t state that in the OP either.
So, based on their prior experience, I believe it would appear to many people like he was implying Givewell, Good Ventures, and EA are deliberately lying. Deliberate lying is generally seen as a bad thing, so to imply someone is deliberately lying is clearly an attribution of bad motives to others. If Ben didn’t expect or think that is how people would construe part of what he was trying to say, I don’t know what he was going for.
I think the current format isn’t a good venue for me to continue this discussion. For now, roughly, I disagree with the framing in your most recent comment, and stand by my previous comment.
I’ll try to write up a top level post that outlines more of my thinking here. I’d have some interest in a private discussion that gets turned into a google doc that gets turned into a post, or possibly some other format. I think public discussion threads are a uniquely bad format for this sort of thing.