Moreover, I think there is more to this than meets the eye.
The question we need to ask is: Is only a superintelligent AI able to escape AI labs and self-replicate onto other servers?
I don’t think so. A powerful AI that is capable enough at hacking and self-replication (ideally, from its perspective, undetected by monitors) could bypass an AI lab’s security systems and escape its servers. In other words, not just superintelligent AI: even pre-superintelligent AIs might be able to escape the servers of AI companies and self-replicate. An AI that is narrowly superhuman at hacking (relative to the security systems put in place to contain it) and meaningfully capable of self-replication would be enough to escape the servers of AI labs.
Current AI models do show a tendency to resist shutdown and to attempt self-replication in certain settings (although, from what I have read, research so far indicates that models cannot yet fully and meaningfully replicate their weights; that could change in the future as models become more capable).
It may also be possible for humans to shut down distributed systems where a powerful (non-superintelligent) AI has replicated itself or is trying to, for example by shutting down targeted nodes through some kind of consensus between node operators after detecting a sudden spike in data volume roughly equal to the size of the model’s weights, but I am not very sure about this. It would also depend heavily on who runs these distributed systems, what kind of consensus exists between the nodes, and how decentralized or centralized they really are. We may never be able to shut down truly decentralized distributed systems, but we potentially could shut down centralized ones.
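The spike-detection idea above could be sketched roughly like this. Everything here is an illustrative assumption, not a real monitoring tool: the `EgressMonitor` name, the assumed weights size, the window length, and the alert threshold are all made up for the sake of the example.

```python
# Hypothetical sketch: flag a node whose recent outbound traffic approaches
# the size of a model's weights. All sizes and thresholds are assumptions.
from collections import defaultdict, deque

WEIGHTS_SIZE_GB = 800   # assumed size of a frontier model's weights
WINDOW = 6              # number of recent samples kept per node
ALERT_FRACTION = 0.5    # flag once half the weights' volume has moved

class EgressMonitor:
    def __init__(self):
        # One sliding window of egress samples per node.
        self.windows = defaultdict(lambda: deque(maxlen=WINDOW))

    def record(self, node_id, egress_gb):
        """Record one egress sample (in GB) for a node.

        Returns True if the node's recent egress looks like a
        weights-sized transfer, False otherwise.
        """
        window = self.windows[node_id]
        window.append(egress_gb)
        return sum(window) >= WEIGHTS_SIZE_GB * ALERT_FRACTION

monitor = EgressMonitor()
# Ordinary traffic stays below the threshold.
assert monitor.record("node-1", 10) is False
# A burst comparable to the weights' size trips the alert.
assert monitor.record("node-1", 500) is True
```

Of course, a capable AI could exfiltrate slowly or in encrypted fragments to stay under any fixed threshold, which is exactly why I am not very confident this kind of detection would work in practice.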
Hehe, I think I would again kindly disagree here. I don’t think these terms are meaningfully applicable to AI systems. AI systems are still non-living things, however superintelligent they may become. They can simply never truly have emotions like hatred, attachment, greed, or lust the way living things such as animals do. Any sign of these emotions in them would simply be an illusion to us.
But if you insist that there is something akin to enlightenment or wisdom in them, then I seriously think we would need to become AI ourselves to truly understand what enlightenment or wisdom means for them, which is impossible.