In the long run, potential for competence isn’t going to differ between AIs and originally-humans as both kinds of minds grow up. There are two differences for AIs. First, initially there is no human uploading, so even human-level early AGIs have AI advantages (speed, massive copying of individuals at trivial cost, learning in parallel) that humans lack. Second, at higher capability levels there will be recipes for creating de novo AI superintelligences, while it might take a long time for individual early AGIs or originally-humans to grow up to the level of superintelligence while remaining themselves, in ways they personally endorse.
This second difference doesn’t help early AGIs secure their own future civilization any more than it helps humanity, so if superalignment is not solved, there is a similar concern about whether early AGIs will still be allowed to grow up at their own pace (that is, about avoiding permanent disempowerment for early AGIs), even once the world has those de novo superintelligences managing the infrastructure.
Of course, AI x-risk is specifically about superalignment not being solved while humans are still in charge, so that the resulting superintelligences don’t give the interests of humanity’s future any nontrivial weight. So either the early AGIs are the ones who solve superalignment, or else even they are left behind, the same as humanity, as misaligned de novo superintelligences take over.