I think getting things up is the right priority! (And I’m glad you are doing this and don’t mean to discourage you at all, though I was annoyed by this post.)
Sorry I accused you of closing comments. Trying to block spam comments is completely understandable, though the fact that the comments section reads “Comments are closed” is, hopefully understandably, confusing :) Edit: actually I just logged in, and still can’t comment.
I agree that there is room for quantitative analysis, and I agree that you are better positioned to provide that than GiveWell (I made a brief concession to this at the start, which perhaps should have been longer but for my unjustified ill humor). I agree that GiveWell lacks staff with relevant skills, but I think the evidence you cite is weak (mostly errors of omission in an expository blog post) and you overstate the case.
I think that in cases like the microfinance meta-analysis, where there are in fact big confounds, GiveWell’s take is more reliable than most meta-analyses (and without seeing your meta-analysis, I would default to trusting GiveWell over you in this case). I think disparaging their approach as vote counting is misleading and places too much confidence in the methodology of your meta-analyses. I’m prepared to be surprised, but the empirical track record of meta-analyses is simply not that good.
Yes, the comment about “choice paralysis” was a response to Raemon. I forgot that in a different context it might look like misattribution; sorry about that.
GiveWell makes recommendations. It seems like at the end of the day, people need to donate, and GiveWell’s judgment about how to weigh intermediates is better than the average donor’s. So it seems like they are making the right call there (it’s not coincidental this is what donors wanted, since they understand that GiveWell’s judgment on that question is better than their own).
GiveWell also discusses the various intermediates that are being weighed against each other, and their reasoning with respect to those intermediates. I do not think their discussion is great nor their positions solidly justified, and I disagree with them on many points. But I don’t see anyone else anywhere who is trying to have that discussion, and it seems like GiveWell is actively encouraging rather than discouraging it (to wit, the community around GiveWell seems to be one of the few places to find serious, reasonable discussion about this issue).
I basically stand by my criticisms, though I do apologize about my tone. I considered editing my original message but think it’s better to let it stand. I’ll make a more respectable comment at the original. I think the world is probably better for AidGrade’s existence, I agree there are gaps in GiveWell’s coverage that can be filled (and core services that could be profitably replicated) and I hope that both groups can do better than they would in isolation. I’ll be more civil going forward—cheers!
First, thanks to paulfchristiano for the moderation. I’m also trying to be moderate, but it’s sometimes hard to gauge one’s own tone on the internet.
Now, apologies for replying to numerous points from different people in one post, but I would feel strange posting all over the place here and this is probably my last post. If people have more questions, it would be helpful to send them to me directly, and I can try to address them on the blog as I multitask (as well as so that more people can benefit from the answers, since as good as Less Wrong is, I doubt it would be the most appropriate long-term home for the main questions people have about AidGrade): http://www.aidgrade.org/blog.
Re: ygert’s “I don’t care about how many people are dying of malaria. I just don’t. What I do care about is people dying, or suffering, of anything”: We’re trying to build up to this, but not there yet. Hang on, please. GiveWell in 2013 is also much better than GiveWell 1.0 was.
Just to quickly add: I’ve also separately been informed that GiveWell’s rationale for simplifying was that donors themselves seemed to focus on global health, with reference to section 2 of http://blog.givewell.org/2011/02/04/givewells-annual-self-evaluation-and-plan-a-big-picture-change-in-priorities/. My gut says that if they had picked a different organization as their #1 rated organization, they would see less emphasis on global health, but I can understand wanting to focus on what their main donors supported. It’s a fair point: if QALYs are what people want, that’s what people want. But do people really put no weight on education, etc.? If you think of the big philosophers, you don’t think of Nussbaum or Singer or whoever else saying okay, QALYs are all that matter. I’m not saying who’s right here, but I do think there’s a greater diversity of opinion than is being reflected here; the popularity of QALYs might in part be due to the fact that we have a measure for them (as opposed to, e.g., something that aggregates education (EALYs?) or aggregates across all fields or is harder to measure).
Re: meta-analysis—first, a meta-analysis tool should in principle be weakly better than (at least as good as) looking at any one study. (See: http://www.aidgrade.org/faq/how-will-you-provide-information-on-context.) An advantage of gathering all these data and coding up different characteristics of studies is that it allows easier filtering of studies later on, so people can look at results in different settings. If results vary widely by setting, you can see that, too. Second, all the things that go into a literature review of a topic also go into a meta-analysis, which is more like a superset. So if you don’t think a paper was particularly good for whatever reason, you can flag that and exclude it from the meta-analysis. We have some quality measures, though unfortunately you can’t tell that from what’s currently online.
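To make the filtering idea concrete, here is a minimal sketch (not AidGrade’s actual pipeline) of how coded-up study characteristics can feed a filterable meta-analysis: each study carries an effect estimate, a standard error, and tags (setting, quality flag), and a standard DerSimonian-Laird random-effects pooled estimate is recomputed over whichever subset the reader selects. Study names, fields, and numbers are illustrative, not real data.

```python
# Hedged sketch: a filterable random-effects meta-analysis over coded studies.
# All study names and numbers below are hypothetical.
from dataclasses import dataclass
from math import sqrt

@dataclass
class Study:
    name: str
    effect: float      # estimated effect size (e.g. standardized mean difference)
    se: float          # standard error of the effect estimate
    setting: str       # coded characteristic, e.g. "rural" / "urban"
    low_quality: bool  # quality flag set during coding; lets users exclude the study

def pooled_effect(studies):
    """DerSimonian-Laird random-effects pooled estimate and its standard error."""
    w = [1.0 / s.se ** 2 for s in studies]                       # fixed-effect weights
    fe_mean = sum(wi * s.effect for wi, s in zip(w, studies)) / sum(w)
    q = sum(wi * (s.effect - fe_mean) ** 2 for wi, s in zip(w, studies))
    df = len(studies) - 1
    denom = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / denom) if df > 0 and denom > 0 else 0.0
    w_re = [1.0 / (s.se ** 2 + tau2) for s in studies]           # random-effects weights
    est = sum(wi * s.effect for wi, s in zip(w_re, studies)) / sum(w_re)
    return est, 1.0 / sqrt(sum(w_re))

studies = [
    Study("Study A", 0.20, 0.05, "rural", False),
    Study("Study B", 0.05, 0.04, "urban", False),
    Study("Study C", 0.35, 0.10, "rural", True),
    Study("Study D", 0.12, 0.06, "rural", False),
]

# Pool everything, then re-pool after filtering by setting and dropping flagged studies.
print(pooled_effect(studies))
print(pooled_effect([s for s in studies if s.setting == "rural" and not s.low_quality]))
```

The point of the sketch is only that once studies are coded with characteristics, the same pooling step can be rerun on any subset, which is what makes “look at results in different settings” and “exclude papers you don’t trust” cheap for the reader.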
My overall impression is that since GiveWell has quite rightly been supported by pretty much everyone who cares about aid and data, it’s particularly hard to say anything that’s different. Hardly anyone has any tribal affiliations to AidGrade yet; relatively speaking, there’s the unknown, etc. But while I feel the concern (and excitement) here has come from people considering AidGrade as a competitor, I would like to point out that each stands to benefit from the other as well. (Actually, now I see that paulfchristiano makes that point as well.)
And on that note, I’ll try to bow out / carry on the conversation elsewhere.
Apologies to everyone involved for the “Choice paralysis” line. It was (I thought a bit more obviously) an exaggeration. To be clear: I myself rely on GiveWell, not to identify the best charity for me, but to establish a lower bound on what “the most effective charity” might be, which I can compare to my best efforts at reviewing more-difficult-to-evaluate-but-probably-higher-impact charities (like, say, CFAR). And this is neither “choice paralysis” nor “not having any idea what to do.” I’ll change the OP to be less flippant.
I want to express my thanks to Eva for starting this project, and wish her luck with getting more research done and the website updated with more content in the days/weeks/months to come.