Your link does not support that “fact”. The page speaks about estimates.
In their own FAQ, GiveWell says:
“However, estimating the cost to save a life involves simplification, guesswork, and subjective judgment. As a result, our estimates are very uncertain. We analyze the cost-effectiveness of programs primarily because doing so helps us see large, clear differences in good accomplished per dollar spent and because working on the models helps us ensure that we are thinking through as many of the relevant issues as possible. For more on how we use cost-effectiveness estimates in our grantmaking, see this page.”
Your argument is that the subjective judgement of people you like, who themselves say they are very uncertain, is a striking fact.
This kind of epistemic laundering is either dishonest or ignorant.
Sam Bankman Fried was a billionaire inspired by the ideas of effective altruism, who ran a crypto firm.
Sam Bankman-Fried wasn’t just “inspired by the ideas of effective altruism”. He was one of the main case studies of 80,000 Hours, which reportedly coached him into entering finance, a path he supposedly would not have taken without their help. That was true even before he had a lot of money to donate.
What EAs look for, before endorsing systemic reform, is genuine evidence of effectiveness. For this reason, EAs tend to think that, say, advocating for communism is unlikely to be very effective. We’re not having communism any time soon, even if it is a good idea (I don’t think it is, of course).
This seems to me to ignore the real dynamics. EAs do invest money in longtermist causes that are supposed to produce significant systemic change (an aligned AI would constitute significant systemic change), and the evidence for individual bets, such as OpenPhil’s huge investment in OpenAI, is quite thin.