It’s also probably 40-100 hours of work, and there are many other urgent things for us to do as well.
Absolutely. As I said in the first place, I hadn’t initially intended to reply to this, as I didn’t think my reactions were likely to be helpful given the situation you’re in. But your follow-up comment seemed more broadly interested in what people might have found compelling, and less focused on specific actionable suggestions, than your original post was. So I decided to share my thoughts on the former question.
I totally agree that you might not have the wherewithal to do the things that people might find compelling, and I understand how frustrating that is.
It might help emotionally to explicitly not-expect that convincing people to donate large sums of money to your organization is necessarily something that you, or anyone, are able to do with a human amount of effort. Not that this makes the problem any easier, but it might help you cope better with the frustration of being expected to put forth what feels like an unreasonable, even superhuman, amount of effort.
Or it might not.
Instead, the reasons I gave (twice!) were: [..]
I’ll observe that the bulk of the text you quote here is not reasons to believe SI is capable of mitigating x-risk, but reasons to believe the task is difficult. What’s potentially relevant to the former question is:
SI has successfully concentrated lots of attention, donor support, and human capital. Also, SI has learned many lessons [and] has lots of experience with these issues;
If that is your primary answer to “Why should I believe SI is capable of mitigating x-risk given $?”, then you might want to show why the primary obstacles to mitigating x-risk are psychological/organizational issues rather than philosophical/technical ones, such that SI’s competence at addressing the former set is particularly relevant. (And again, I’m not asserting that showing this is something you are able to do, or ought to be able to do. It might not be. Heck, the assertion might even be false, in which case you actively ought not be able to show it.)
You might also want to make more explicit the path from “we have experience addressing these psychological/organizational issues” to “we are good at addressing these psychological/organizational issues (compared to relevant others)”. Better still might be to focus your attention on demonstrating the latter and ignore the former altogether.
My statement “SI has successfully concentrated lots of attention, donor support, and human capital [and also] has learned many lessons [and] has lots of experience with [these unusual, complicated] issues” was in support of “better to help SI grow and improve rather than start a new, similar AI risk reduction organization”, not in support of “SI is capable of mitigating x-risk given money.”
However, if I didn’t also think SI was capable of reducing x-risk given money, then I would leave SI and go do something else, and indeed will do so in the future if I come to believe that SI is no longer capable of reducing x-risk given money. “How to Purchase AI Risk Reduction” is a list of things that (1) SI is currently doing to reduce AI risk, or that (2) SI could do almost immediately (to reduce AI risk) if it had sufficient funding.
My statement [..] was in support of “better to help SI grow and improve rather than start a new, similar AI risk reduction organization”, not in support of “SI is capable of mitigating x-risk given money.”
Ah, OK. I misunderstood that; thanks for the clarification. For what it’s worth, I think the case for “support SI >> start a new organization on a similar model” is pretty compelling.
And, yes, the “How to Purchase AI Risk Reduction” series is an excellent step in the direction of making SI’s current and planned activities, and how they relate to your mission, more concrete and transparent. Yay you!
Thank you for understanding. :)