I think the key crux here is how long you expect the interval to be between the AGI killing all humans and getting the hardware repair/manufacturing supply chain running on its own. I think it’s fairly clear that this interval will be short, independent of whether fully automated datacenters exist pre-AGI. Hardware will of course fail during that interval, but redundancy should keep things running long enough to fix the problem. For example, you may eventually need to physically swap out the drives in your cluster, but if you just need to keep things running for a decade without maintenance, you can spin down a large segment of the drives as standbys, increase the replication factor past the usual 3 copies, and so on (see the sketch below). It also helps that the hardest parts to manufacture (the silicon) are the least likely to fail; macroscopic parts like spinning disks, fans, and capacitors typically fail first (granted, spinning disks are nontrivial to manufacture, but they can also be phased out). Similar approaches apply to the rest of the computing and power infrastructure. The main reason I can imagine for arguing that the interval will not be short is that robotics(/nanotech) is hard, and it may be more difficult for the AI to do robotics research if it is not already bootstrapped. That ultimately comes down to whether you expect training in simulation to transfer well to the real world, whether you expect the AGI to be substantially better than humans at robotics research, whether the robots that exist at the time of AGI takeoff will have the requisite hardware and lack only the software, and so on.
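To make the redundancy point concrete, here is a minimal Monte Carlo sketch. Every parameter (the failure rates, replica count, standby pool size, and the coarse one-year timestep) is an illustrative assumption, not a measured value; the point is just that with a few extra replicas and a pool of spun-down spares that software can rebuild onto, the probability of losing all copies of anything over a decade of zero physical maintenance is vanishingly small.

```python
import random

# Illustrative assumptions only:
AFR_ACTIVE = 0.02     # annualized failure rate of a spinning drive
AFR_STANDBY = 0.005   # assumed lower rate for a spun-down standby
YEARS = 10            # the unattended "intervening time" in question
REPLICAS = 5          # redundancy raised past the usual 3 copies
STANDBYS = 20         # spun-down spares available to one replica group
TRIALS = 100_000

def group_survives() -> bool:
    """One replica group: a failed active drive is rebuilt onto a
    standby in software, with no physical swap. Data is lost only if
    every live copy fails within a single rebuild period (one year
    here, which is deliberately pessimistic)."""
    active, spares = REPLICAS, STANDBYS
    for _ in range(YEARS):
        # Failures among the active copies this year.
        active -= sum(random.random() < AFR_ACTIVE for _ in range(active))
        if active == 0:
            return False
        # Standbys also decay, just more slowly.
        spares -= sum(random.random() < AFR_STANDBY for _ in range(spares))
        # Rebuild back up to full replication from surviving spares.
        refill = min(REPLICAS - active, spares)
        active += refill
        spares -= refill
    return True

losses = sum(not group_survives() for _ in range(TRIALS))
print(f"P(data loss over {YEARS}y) ≈ {losses / TRIALS:.5f}")
```

With these numbers the loss probability is on the order of 10⁻⁸ per group (all 5 live copies would have to die in the same year), so even 100,000 trials will typically report ≈ 0. The exact figures don't matter; what matters is that the survival probability is a software-configuration knob, not a manufacturing problem.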
Independent of this argument, I think intentional, gradual automation of human labor is also a possibility, especially in slower-takeoff worlds. In particular, the AGI can always steer the world toward fulfilling the conditions it needs (e.g., subtly nudging people to do more robotics research). I agree with the other commenters that this in no way guarantees that our replacements will be, in any sense, versions of ourselves.