Good post, Jonah. You say that “effective altruists should spend much more time on qualitative analysis than on quantitative analysis in determining how they can maximize their positive social impact”. What do you mean by “qualitative analysis”? As I understand it, your points are: i) the amount by which you should regress to your prior is much greater than you had previously thought, so ii) you should favour robustness of evidence more than you previously did. But that doesn’t favour qualitative over quantitative evidence. It favours more robust evidence of lower but still good cost-effectiveness over less robust evidence of higher cost-effectiveness. The nature of the evidence could be either qualitative or quantitative, and the things you mention in “implications” are generally quantitative.
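The trade-off described here — robust evidence of good cost-effectiveness versus noisy evidence of great cost-effectiveness — can be made concrete with a toy normal-normal shrinkage model. This is my own illustration, not from either post; the charities, point estimates, and variances are all invented:

```python
def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    """Normal-normal Bayesian update: precision-weighted blend of prior and estimate."""
    w = prior_var / (prior_var + estimate_var)  # weight given to the noisy estimate
    return prior_mean + w * (estimate - prior_mean)

# Work in log10(good done per dollar), with the prior centred on a "typical" charity.
PRIOR_MEAN, PRIOR_VAR = 0.0, 1.0

# Charity A: claims 10x typical effectiveness, backed by well-measured, low-noise evidence.
# Charity B: claims 1000x typical effectiveness, but the evidence is very noisy.
a = posterior_mean(PRIOR_MEAN, PRIOR_VAR, estimate=1.0, estimate_var=0.25)
b = posterior_mean(PRIOR_MEAN, PRIOR_VAR, estimate=3.0, estimate_var=9.0)

print(f"posterior log-effectiveness  A: {a:.2f}  B: {b:.2f}")
```

After regression to the prior, the modest-but-robust claim (A) ends up with the higher posterior estimate, even though B's headline number was 100x larger — and note that nothing in this calculation cares whether the underlying evidence was qualitative or quantitative.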
In terms of “good done per dollar”—for me that figure is still far greater than the one I began with (and I take it that that’s the question EAs are concerned with, rather than “lives saved per dollar”). This is because, in my initial analysis—and, I’d presume, in most people’s initial analyses—benefits to the long-term future weren’t taken into account, or weren’t thought to be morally relevant. But those (expected) benefits strike me, and most people I’ve spoken with who accept their moral relevance, as far greater than the short-term benefits to the person whose life is saved. So, in terms of my expectations about how much good I can do in the world, I’m able to exceed those expectations by a far greater amount than I’d previously thought likely. And that holds true whether it costs $2,000 or $20,000 to save a life. I mention this not to criticise or support your post, but to highlight that the lesson to take from past updates on evidence can look quite different depending on whether you’re talking about “good done per dollar” or “lives saved per dollar”, and the former is what we ultimately care about.
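The way the two metrics can come apart is just arithmetic, and can be sketched with entirely invented numbers (the 100x long-run multiplier below is a placeholder for illustration, not anyone's actual estimate):

```python
def good_per_dollar(cost_per_life, short_term_value=1.0, long_run_multiplier=0.0):
    """Value per dollar donated when each life saved also yields long-run benefits."""
    return short_term_value * (1.0 + long_run_multiplier) / cost_per_life

# Original model: $2,000 per life saved, long-run benefits ignored.
original = good_per_dollar(2_000)

# Updated model: the cost estimate worsens 10x to $20,000 per life, but long-run
# benefits (assumed here to be worth 100x the short-term benefit) are now counted.
updated = good_per_dollar(20_000, long_run_multiplier=100.0)

print(updated > original)  # prints True: good-per-dollar rose despite the higher cost
```

So “lives saved per dollar” can fall by an order of magnitude while “good done per dollar” rises, provided the newly counted long-run benefits are large enough — which is the structure of the point being made above.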
Final point: Something you don’t mention is that, when you find out that your evidence is crappier than you’d thought, two general lessons are to pursue things with high option value and to pay to gain new evidence (though I acknowledge that this depends crucially on how much new evidence you think you’ll be able to get). Building a movement of people who are aiming to do the most good with their marginal resources, and who are trying to work out how best to do that, strikes me as a good way to achieve both of these things.
The nature of the evidence could be either qualitative or quantitative, and the things you mention in “implications” are generally quantitative.
Assessing the quality of the people behind a project is qualitative rather than quantitative.
Room for more funding is in principle quantitative, but my experience has been that in practice, room-for-more-funding analysis ends up being more qualitative: you have to make judgments about things such as who would otherwise have funded the project, which hinge heavily on knowledge of the philanthropic landscape in ways that aren’t easily quantified.
Gauging historical precedent requires many judgment calls, and so can’t be quantified.
Deciding which giving opportunities one can learn the most from can’t be quantified.
In terms of “good done per dollar”—for me that figure is still far greater than I began with (and I take it that that’s the question that EAs are concerned with, rather than “lives saved per dollar”). [...] because, in my initial analysis—and in what I’d presume are most people’s initial analyses—benefits to the long-term future weren’t taken into account, or weren’t thought to be morally relevant.
I explicitly address this in the second paragraph of the “The history of GiveWell’s estimates for lives saved per dollar” section of my post as well as the “Donating to AMF has benefits beyond saving lives” section of my post.
Building a movement of people who are aiming to do the most good with their marginal resources, and who are trying to work out how best to do that, strikes me as a good way to achieve both of these things.
I agree with this. I don’t think that my post suggests otherwise.
I explicitly address this in the second paragraph of the “The history of GiveWell’s estimates for lives saved per dollar” section of my post as well as the “Donating to AMF has benefits beyond saving lives” section of my post.
Not really. You do mention the flow-on benefits. But you don’t analyse whether your estimate of “good done per dollar” has increased or decreased, and that’s the relevant thing to analyse. If you argued “cost-per-life-saved estimates have regressed to the prior by more than expected, and for that reason I expect my estimates of good done per dollar to regress really substantially too” (an argument I think you would endorse), I’d accept that argument, though I’d worry about how much it generalises to cause areas other than global poverty (e.g. I expect there to be much less of an ‘efficient market’ for activities where there are fewer agents with the same goals and values, such as benefiting non-human animals or making sure the far future turns out well). Optimism bias still holds, of course.
You say that “cost-effectiveness estimates skew so negatively”. I was just pointing out that for me that hasn’t been the case (for good done per dollar), because long-run benefits strike me as swamping short-term benefits, a factor that I didn’t initially incorporate into my model of doing good. And though I agree with the conclusion that you want as many different angles as possible, focusing on cost per life saved rather than good done per dollar might lead you to miss important lessons (e.g. “make sure that you’ve identified all crucial normative and empirical considerations”). I doubt that you personally have missed those lessons, but they aren’t in your post. And that’s fine, of course: you can’t cover everything in one blog post. But it’s important for the reader not to overgeneralise.
I agree with this. I don’t think that my post suggests otherwise.
I wasn’t suggesting it does.
Ok. Do you have any suggestions for how I could modify my post to make it more clear in these respects?
I think your points about the limits of quantitative analysis are a good reality check, but I’m not sure I understand the argument for the types of assessment you suggest here.
Why should I (someone who’s not remarkably experienced in charity evaluation) expect my intuition about e.g. “historical precedent” to be more valid than the data GiveWell collects?