In a world that has ASI, a much better way of maintaining the integrity of the audit system is to build it to be intelligent enough to tell whether it’s being fooled, and to give it a desire of its own to stay neutral. Which I guess is like being multistakeholder, since you both will have signed off on its design.
But in such a world, the audit system would be a feature of the brain of the local authorities. You would co-design yourselves in such a way that you have the ability to make binding promises (or, if you’re precious about your design, co-design your factories in such a way that they have the ability to verify that your design can make binding promises (or co-design your factory factories to …)). This makes you a better trading partner, or a viable one at all. You have the option of not using the ability except when it benefits you. But having it means that they can simply ask you whether your galaxy contains any optimal 17-square packings, and you can send them an attestation saying that no, when you need to pack 17 squares you use the socially acceptable symmetrical, suboptimal packing; and if the attestation carries a certain signature, they know you weren’t capable of faking it.
You really don’t want to lack this ability.
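To make the “certain signature” part concrete, here’s a minimal sketch in Python using the `cryptography` library, assuming the binding mechanism bottoms out in an ordinary public-key signature. The claim text and key handling are invented for illustration; the hard part, arranging at design time that the signing key can only ever sign true statements, is exactly what the co-design has to buy you.

```python
# Toy sketch of an unforgeable attestation, using an Ed25519 signature
# as a stand-in for whatever binding mechanism co-designed ASIs would use.
# All names and the claim text are hypothetical.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key pair baked in at "factory" time; by assumption the galaxy owner
# cannot later use it to sign arbitrary (false) claims.
audit_key = Ed25519PrivateKey.generate()
audit_public = audit_key.public_key()

# The neighbour's query and the committed answer.
claim = b"no optimal 17-square packings; only the symmetrical suboptimal packing is used"
attestation = audit_key.sign(claim)

# The neighbour verifies against the public key fixed at design time;
# verify() raises InvalidSignature if the attestation was forged.
audit_public.verify(attestation, claim)
print("attestation checks out")
```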
I think I made a mistake about your assumptions: I interpreted the parties in your original comment as superhuman non-ASI versions of the human galaxy owners, rather than as ASIs themselves.
Let’s see if I understand your reply correctly. You posit that the participants will be able to design a mechanism that, starting N levels upstream (factories of factories), recursively produces honest audit systems. For example, an evil participant may wish to construct an audit system that is absolutely reliable for trade and everything else, but allows them, just once, to create a torture simulation and then hide and erase all traces of having created it. If the ASIs are able to prevent this, then they can escape game-theoretic traps and enjoy cooperation without centralizing.
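If I’ve understood the shape of that mechanism, it’s essentially a chain-of-trust recursion over the factory hierarchy. A toy sketch follows, with every name hypothetical; the real difficulty is making the honesty of each level genuinely checkable rather than a boolean one simply asserts.

```python
# Toy sketch of recursive audit verification: an audit system is trusted
# only if it and every ancestor up the factory chain was honestly designed.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AuditSystem:
    honest_design: bool                  # did this level's builder follow the agreed spec?
    built_by: Optional["AuditSystem"]    # the factory (or factory factory, ...) one level up

def verify_chain(system: Optional[AuditSystem]) -> bool:
    """Trust the audit system only if no level in its ancestry was subverted."""
    if system is None:                   # reached the jointly signed-off root
        return True
    return system.honest_design and verify_chain(system.built_by)

# Example: a three-level chain (factory factory -> factory -> audit system).
root = AuditSystem(honest_design=True, built_by=None)
factory = AuditSystem(honest_design=True, built_by=root)
audit = AuditSystem(honest_design=True, built_by=factory)
print(verify_chain(audit))  # True only if every level was honestly designed
```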