Before I dive into this material in depth, a few thoughts:
First, I want to sincerely congratulate you on being (it seems to me) the first in our tribe to dissent.
Second, it seems your problem isn’t with an intelligence explosion as a risk all on its own, but rather as one risk among others, one that is farther from being solved (both in terms of work done and in tractability). If so, this post could use a better title, say, “Why an Intelligence Explosion is a Low-Priority Global Risk”, which does not a priori exclude SIAI from potential donation targets. If I’m wrong about this, and you would consider eliminating the global risk from an intelligence explosion a low priority even setting other global risks aside, I’ll have to ask for an explanation.
Edit: It seems my comment has been noted and the title of the post changed.