I think I misread your assumptions: I interpreted the parties in your original comment as superhuman non-ASI versions of the human galaxy owners rather than as ASIs themselves.
Let’s see if I understand your reply correctly. You posit that the participants will be able to design a mechanism that, starting N levels upstream (factories of factories), recursively produces honest audit systems. For example, an evil participant may wish to construct an audit system that is absolutely reliable for trade and everything else, but that allows them, once, to create a torture simulation and then hide and erase all traces of having done so. If the ASIs are able to prevent this, then they can escape game-theoretic traps and enjoy cooperation without centralizing.
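To make the failure mode concrete, here is a toy sketch (my own framing; the names `Factory`, `audit_chain`, and the one-level-deep inspection rule are all my assumptions, not anything from your comment). Each level of the factories-of-factories chain audits only the level it directly builds, so a corrupted intermediate level can launder a hidden action produced below it. That is exactly why honesty would have to be enforced recursively from N levels upstream rather than checked locally:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Factory:
    level: int
    honest_audit: bool                      # does this level faithfully report what it sees?
    hidden_actions: List[str] = field(default_factory=list)
    child: Optional["Factory"] = None

    def build_child(self, honest_audit: bool) -> "Factory":
        self.child = Factory(level=self.level + 1, honest_audit=honest_audit)
        return self.child

    def audit_chain(self) -> bool:
        """An honest level inspects its direct child's actions, but must rely on
        the child's own report for everything deeper. A dishonest level simply
        vouches for its whole subtree."""
        if not self.honest_audit:
            return True  # corrupted auditor lies: "all clear below me"
        if self.child is None:
            return not self.hidden_actions
        if self.child.hidden_actions:
            return False  # direct inspection catches the child
        return self.child.audit_chain()


def defector_chain(levels: int, corrupted_level: int) -> Factory:
    """Build a chain with one corrupted level and a hidden action at the bottom."""
    root = Factory(level=0, honest_audit=True)
    node = root
    for k in range(1, levels):
        node = node.build_child(honest_audit=(k != corrupted_level))
    node.hidden_actions.append("forbidden simulation")
    return root


# If the bottom level itself is corrupted, its honest parent catches it directly:
print(defector_chain(levels=3, corrupted_level=2).audit_chain())  # False: detected

# But if the corruption sits one level above the hidden action, the dishonest
# intermediary vouches for its subtree and the audit wrongly passes:
print(defector_chain(levels=4, corrupted_level=2).audit_chain())  # True: undetected
```

In this toy model, local (one-level-deep) auditing is not enough; the mechanism you are describing would have to guarantee that every level it constructs is itself an honest auditor, all the way down, which is the part I am unsure ASIs can pull off against an adversary designing for that single hidden defection.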