When it comes to AI regulation, a certain train of thought comes to mind:
Because a superintelligent AI has never been created, it is reasonable to assume that building one requires an enormous amount of energy and resources.
Due to global inequality, some regions of the world have vastly more access to energy and resources than others.
Therefore, when creating an AGI becomes possible, only a few regions of the world (and only a small number of people within those regions) will be capable of doing so.
Therefore, enforcement of AI regulations only needs to focus on this very limited population, educating them about the existential threat of unfriendly AI (UFAI).
I think it is best to consider this analogous to another man-made existential threat: nuclear weapons. True, there is always the concern of a leak in international controls (the portions of the Soviet nuclear arsenal that went unaccounted for after the fall of the USSR, for example). But generally speaking, there is a great filter of cost (procuring and refining uranium, training domestic nuclear researchers, and so on) such that only a handful of nations have ever built such weapons.