I'm not being hopeful: I think this hypothetical involves a less merciful takeover than otherwise, because the AIs that take over are not superintelligent and so are unable to be as careful about the outcomes as they might want. In any case there's probably at least permanent disempowerment, but a non-superintelligent takeover makes literal extinction (or a global catastrophe short of extinction) more likely (for AIs with the same values).