Do you mean that the set of possible objections I gave isn’t complete? If so, I didn’t mean to imply that it was.
For example, someone may think that superintelligences cannot arise quickly.
And therefore we’re powerless to do anything to prevent the default outcome? What about the Modest Superintelligences post that I linked to?
Or they may think that improvement of our own intelligence will make us as effective as superintelligences well before we solve the AI problem (because it is just that tricky).
If someone has a strong intuition to that effect, then I’d ask them to consider how to safely improve our own intelligence.