I agree that AI x-risk gets high well before "AGI fooms and becomes superhuman". I suspect there is a phase transition where mere scaling up, without exceptional intelligence or self-modification, makes it possible for the machine to completely overwhelm humans.
The good news is that meaningful research is possible in that regime: we can build safeguards and get multiple attempts, unlike in the doomed one-shot case of aligning a superintelligent AI.