Aside from the layer-one security considerations, if you can define a minimum set of requirements for safe AI and a clear chain of escalation with defined responses, you can eventually program this into AI itself, pending a solution to alignment. At a certain level of AI development, AI safety becomes self-enforcing. At that point the disincentives should be aimed at non-networked compute capacity, at least beyond the threshold needed for strong AI, since compute that sits off the network escapes the self-enforcing regime. Once AI safety is self-enforcing, the security requirement that ownership be restricted to states should become relaxable, albeit within definable limits, and pending control of manufacturing according to security-compliant demands. Since manufacturing is physical and capital-intensive, this is probably fairly easy to achieve, at least when compared to AI alignment itself.
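To make "a clear chain of escalation with defined responses" concrete, here is a minimal sketch in Python of what a machine-readable escalation chain might look like. Everything in it is hypothetical and illustrative: the tier names, the `SafetyRequirement` and `EscalationChain` types, and the example threshold are assumptions of this sketch, not a real standard or anyone's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical severity ladder; the tiers and their responses are illustrative only.
class Severity(Enum):
    ADVISORY = 1   # log and continue
    CONSTRAIN = 2  # restrict capabilities, notify the operator
    SUSPEND = 3    # halt the model, escalate to a regulator
    ISOLATE = 4    # sever network access pending human review

@dataclass(frozen=True)
class SafetyRequirement:
    name: str            # e.g. "bounded compute"
    check: callable      # predicate over observed behaviour; True means compliant
    severity: Severity   # tier reached when the check fails

@dataclass(frozen=True)
class EscalationChain:
    requirements: list

    def evaluate(self, observation) -> Severity | None:
        """Return the highest severity triggered, or None if all checks pass."""
        triggered = [r.severity for r in self.requirements if not r.check(observation)]
        return max(triggered, key=lambda s: s.value) if triggered else None

# Example: a single (made-up) requirement and a one-step evaluation.
chain = EscalationChain(requirements=[
    SafetyRequirement("bounded compute", lambda obs: obs["flops"] < 1e20, Severity.SUSPEND),
])
print(chain.evaluate({"flops": 1e22}))  # Severity.SUSPEND
```

The point of the sketch is only that "defined responses" means the responses are data rather than discretion: auditable, versionable, and, in principle, eventually enforceable by the system itself rather than by its operators.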