I downvoted because this feels overly smug to me. I think it’s a legitimate issue, but GiveWell has made many arguments for why they do what they do, and OpenPhil has made some progress on figuring out how to evaluate AI organizations. Sure, many fields might very well be vastly more fruitful, but they also might not. How do we know which ones?
Can you say more about the perceived smugness? It seems to me like a straightforward account of the obvious limitation of GiveWell’s scope. I only didn’t upvote because it seemed too obvious.
To me, the tone came across as “Ho ho ho, look at those stupid GiveWell people who have never heard of the streetlight effect! They’re blinded by their own metrics and can’t even see how awesome MIRI is!” And this despite there being no engagement with or acknowledgement of (a) materials from GiveWell that address the streetlight-effect argument, (b) OpenPhil, or (c) how to actually start resolving the problem (or even that the problem is particularly hard).
I don’t want to have a high demand for rigor, especially for Discussion-type posts—for me, it’s more about the lack of humility.
For example, this part:

Technological research, especially AI, or perhaps effective educational reform, or improving the scientific community’s norms might very well be vastly more fruitful fields. … by GiveWell’s metrics, they can’t be measured at all!
It is not obvious from the article whether the author even checked that no charity of this type was actually evaluated by GiveWell. For all we know, the author only checked that none of them made the top-7 list. But there is a possibility that GiveWell did give some consideration to such charities, only to conclude that none of them belong on the top list.
That is, the article feels as if the author automatically concluded that GiveWell is stupid and didn’t even bother to verify this assumption, using only the top-7 list as evidence. To convince me otherwise, it would help to quote some text from the GiveWell website confirming it. Because I think it is likely that GiveWell considered this topic explicitly and published their conclusion, whatever it was.