I was wondering: what fraction of people here agree with Holden’s advice regarding donations and his arguments?
Prior to reading Holden’s article, my last charitable donation had been to an organization working on fighting malaria recommended by GiveWell, and I was tentatively planning to follow GiveWell’s recommendations for future charitable giving. In that sense, I already agreed with Holden, though I was semi-agnostic about what was actually the best use of my money.
It seemed to me that the payoff from donating to the Singularity Institute was highly uncertain, whereas the payoff from donating to an organization that can get results in the near future is much clearer.
Furthermore, I suspect that whatever the Singularity Institute does now is likely to have little impact relative to the impact of future work on AI safety that will happen once powerful AI is much nearer.
Objections 1 and 2 seemed to me very plausible, though I haven’t gotten to read much of the discussion of them that has happened here, so I have fairly low confidence in that assessment.
What fraction assumes there is a good chance he is essentially correct?
Good chance? Oh definitely.
What fraction finds it necessary to determine whether Holden is essentially correct in his assessment before working on counter-argumentation, acknowledging that such an investigation could result in the dissolution or suspension of SI?
It would seem to me, from the response, that the chosen course of action is to try to improve the presentation of the argument, rather than to try to verify the truth of the assertions (with a non-negligible likelihood that some assertions would be found false instead). This strikes me as a very odd stance.
Ultimately: why does SI seem certain that it has badly presented some valid reasoning, rather than presented some invalid reasoning?
I’m not “SI,” nor am I certain that the SI has merely presented valid reasoning badly. However, trying to articulate the arguments more clearly seems to me a worthwhile endeavor. Explaining ideas clearly is really, really hard work, so it seems to me there’s a significant (though hardly certain) chance that the SI has just done a bad job of explaining its ideas.