Here’s what I think is true and important about this post: some people will try to explicitly estimate expected values in ways that don’t track the real expected values, and when they do this, they’ll make bad decisions. We should avoid these mistakes, which may be easy to fall into, and we can avoid some of them by using regressions of the kind described above in the case of charity cost-effectiveness estimates. As Toby points out, this is common ground between GiveWell and GWWC. Let me list what I take to be a few points of disagreement.
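Before getting to those, for concreteness, here is a toy version of the kind of regression adjustment at issue. This is a minimal sketch under assumptions of my own (a normal prior, normally distributed estimate error, and made-up numbers), not anyone’s actual procedure:

```python
# Toy Bayesian adjustment ("regression to the mean") of a charity
# cost-effectiveness estimate. The normal prior, normal error model,
# and all numbers here are illustrative assumptions.

prior_mean = 10.0      # prior over true cost-effectiveness (e.g. DALYs per $1000)
prior_var = 25.0       # prior variance: how much charities plausibly vary
estimate = 100.0       # an explicit expected value estimate for one charity
estimate_var = 2500.0  # error variance: how noisy that estimate is

# Precision-weighted average: the noisier the estimate relative to the
# prior, the more it gets shrunk back toward the prior mean.
posterior_mean = (prior_mean / prior_var + estimate / estimate_var) / \
                 (1 / prior_var + 1 / estimate_var)

print(posterior_mean)  # ~10.9: almost all of the surprising estimate is discounted
```

The point is just that a sufficiently noisy explicit estimate ends up almost entirely shrunk back toward the prior, which is the effect Holden’s post relies on.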
I think that after making an appropriate attempt to gather evidence, the result of doing the best expected value calculation that you can is by far the most important input into a large-scale philanthropic decision. We should think about whether the result of the calculation makes sense, we should worry if it is wildly counterintuitive, and we should try hard to avoid mistakes. But the result of this calculation will matter more than most kinds of informal reasoning, especially if the differences in expected value are great. I think this will be true for people who are competent at thinking in terms of subjective probabilities and expected values, which rules out a lot of people, but includes a lot of the people who would consider making important philanthropic decisions on the basis of expected value calculations.
I think this argument unfairly tangles up making decisions explicitly on the basis of expected value calculations with Pascal’s Mugging. It’s not too hard to choose a bounded utility function that doesn’t tell you to pay the mugger, and there are independent (though not clearly decisive) reasons to use a bounded utility function for decision-making, even when the probabilities are stable. Since the unbounded utility function assumption can shoulder the blame, the invocation of Pascal’s Mugging doesn’t seem all that telling. (Also, for reasons Wei Dai gestures at I don’t accept Holden’s conjecture that making regression adjustments will get us out of the Pascal’s Mugging problem, even if we have unbounded utility functions.)
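To see why a bounded utility function blocks the mugging, here is a toy calculation (again a sketch with made-up numbers and an arbitrary bounded function of my own choosing): once utility is capped at some bound B, the mugger’s promise can contribute at most p·B of expected utility, so for small enough p the sure cost wins.

```python
import math

# Toy Pascal's Mugging decision with a bounded utility function.
# The tanh form, the bound, and all numbers are illustrative assumptions.

B = 1e6  # utility bound: no outcome is worth more than B or less than -B

def utility(payoff):
    # Roughly linear for ordinary payoffs, asymptotes to +/-B for huge ones.
    return B * math.tanh(payoff / B)

p = 1e-20            # probability the mugger actually delivers
promised = 3 ** 100  # astronomically large promised payoff
cost = 5.0           # sure cost of paying the mugger

eu_pay = p * utility(promised) + (1 - p) * utility(-cost)
eu_refuse = utility(0.0)

print(eu_pay < eu_refuse)  # True: the bounded agent keeps its five dollars
```

The particular shape of the bounded function doesn’t matter much; any cap makes the product p·B finite, so a modest sure loss dominates once p is small enough.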
Though I agree that intuition can be a valuable tool when trying to sanity-check an expected value calculation, I am hesitant to rely too heavily on it. Things like scope insensitivity and ambiguity aversion could easily make me unreasonably queasy about relying on a perfectly reasonable expected value calculation.
Finally, I classify several of the arguments in this post as “perfect world” arguments because they involve thinking a lot about what would happen if everyone behaved in a certain kind of way. I don’t want to rest too much weight on these arguments because my behavior doesn’t, causally or acausally, affect enough other people’s behavior for these arguments to be directly relevant to my decisions. Even if I accepted perfect world arguments, some of these arguments appear not to work. For example, if all donors were rational altruists, and that was common knowledge, then charities that were effective would have a strong incentive to provide evidence of their effectiveness. If some charity refused to share information, that would be very strong evidence that the charity was not effective. So it doesn’t seem to be true, as Holden claims, that if everyone were totally reliant on explicit expected value calculations, we’d all give to charities about which we have very little information. (Deciding not to be totally transparent is not such good evidence now, since donors are far from being rational altruists.)
Though I have expressed mostly disagreement, I think Holden’s post is very good and I’m glad that he made it.
While I have sympathy with the complaint that SI’s critics are inarticulate and often say wrong things, Eliezer’s comment does seem to be indicative of the mistake Holden and Wei Dai are describing. Most extant presentations of SIAI’s views leave much to be desired in terms of clarity, completeness, concision, accessibility, and credibility signals. This makes it harder to make high-quality objections. I think it would be more appropriate to react to poor critical engagement more along the lines of “We haven’t gotten great critics. That probably means that we need to work on our arguments and their presentation,” and less along the lines of “We haven’t gotten great critics. That probably means that there’s something wrong with the rest of the world.”