I think I can summarize my difficulties with this comment a bit better now.
(1) It’s quite long, and brings up many objections that I dealt with in detail in the longer series I linked to. There will always be more excuses someone can generate that sound facially plausible if you don’t think them through. One has to limit scope somehow, and I’d be happy to get specific constructive suggestions about how to do that more clearly.
(2) You’re exaggerating the extent to which Open Philanthropy Project, Good Ventures, and GiveWell have been separate organizations. The original explanation of the partial funding decision (a decision about how to recommend allocating Good Ventures’s capital) was published under the GiveWell brand, but under Holden’s name. My experience working for the organizations was broadly consistent with this. If they’ve since segmented more, that sounds like an improvement, but it doesn’t help enough with the underlying revealed-preferences problem.
“I’d be happy to get specific constructive suggestions about how to do that more clearly.”
I don’t know that this suggestion is best – it’s a legitimately hard problem – but a policy I think would be pretty reasonable is:
When responding to lengthy comments/posts that include at least 1-2 things you know you dealt with in a longer series, one option is to simply leave it at: “hmm, I think it’d make more sense for you to read through this longer series and think carefully about it before continuing the discussion” rather than trying to engage with any specific points.
And then shift the whole conversation into a slower mode, where people are expected to take a day or two between replies to make sure they understand all the context.
(I think I would have had difficulty responding to Evan’s comment similar to what you describe here.)
To clarify a bit: I’m more confused about how to make the original post more clearly scope-limited than about how to improve my commenting policy.
Evan’s criticism largely deals with the fact that there are specific possible scenarios I didn’t discuss, which might make more sense of e.g. GiveWell’s behavior. I think these are mostly not coherent alternatives, just differently incoherent ones that amount to changing the subject.
It’s obviously not possible to discuss every expressible scenario. A fully general excuse like “maybe the Illuminati ordered them to do it as part of a secret plot,” for instance, doesn’t help very much, since it posits an exogenous source of complications that isn’t strongly constrained by our observations and doesn’t constrain our future anticipations very well. We always have to allow for the possibility that something very weird is going on, but I think “X or Y” is a reasonable shorthand for “very likely, X or Y” in this context.
On the other hand, we can’t exclude scenarios arbitrarily. It would have been unreasonable for me, on the basis of the stated cost-per-life-saved numbers, to suggest that the Gates Foundation is, for no good reason, withholding money that could save millions of lives this year, when there’s a perfectly plausible alternative: that they simply don’t think this amazing opportunity is real. This is especially plausible given that GiveWell itself has said that its cost-per-life-saved numbers don’t refer to some specific factual claim.
“Maybe partial funding because AI” occurred to enough people that I felt the need to discuss it in the long series (which addressed all the arguments I’d heard up to that point). But ultimately it amounts to a claim that all the discourse about saving “dozens of lives” per donor is beside the point, since there’s a much higher-leverage thing to allocate funds to; in which case, why even engage with the claim in the first place?
Any time someone addresses a specific part of a broader issue, there will be countless such scope limitations, and they can’t all be made explicit in a post of reasonable length.