Yes, I imagine that powerful agents could eventually adopt clean (easy to reason about) decision theories, simulate other agents until those agents also adopt clean decision theories, and then reason about things like, “If I decide to do X, that logically implies these other agents deciding Y and Z.”
(Except it can’t be this simple, because it runs into problems with commitment races, e.g., while I’m simulating another agent, they suspect this and in response make a bunch of commitments that give themselves more bargaining power. But something like this, made more sophisticated in some way, might turn out to work.)
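To make the basic idea concrete (setting the commitment-race complication aside), here is a toy sketch in Python. Everything in it is my own illustrative assumption, not anything from the original comment: two agents are assumed to have already adopted the same transparent “mirror the predicted other player” rule, and the payoff numbers are just prisoner’s-dilemma-style placeholders. The point is only to show the shape of the reasoning “if I decide X, that logically implies the other agent deciding Y.”

```python
# Toy sketch (illustrative only; the decision rule and payoffs are made-up assumptions)
# of reasoning through the logical consequences of one's own decision, given that
# both agents are known to run the same clean, transparent decision rule.

ACTIONS = ["cooperate", "defect"]

# One-shot symmetric game with prisoner's-dilemma-like payoffs: (my payoff, their payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}


def mirror_policy(predicted_other_action: str) -> str:
    """A 'clean' (easy to reason about) rule both agents are assumed to have adopted:
    play whatever you predict the other agent will play."""
    return predicted_other_action


def choose(other_policy) -> str:
    """Pick the action whose logical consequence -- the other agent's response under
    its known transparent rule -- gives me the best payoff."""
    best_action, best_value = None, float("-inf")
    for my_action in ACTIONS:
        # "If I decide my_action, that logically implies the other agent,
        # running the same rule on a prediction of me, deciding..."
        other_action = other_policy(my_action)
        my_payoff = PAYOFFS[(my_action, other_action)][0]
        if my_payoff > best_value:
            best_action, best_value = my_action, my_payoff
    return best_action


if __name__ == "__main__":
    # Mutual cooperation (3) beats mutual defection (1) once the mirroring is
    # taken into account, so the chosen action is "cooperate".
    print(choose(mirror_policy))
```

Of course, the hard part the parenthetical points at is exactly what this sketch assumes away: that the other agent sits still and runs the agreed rule rather than exploiting the fact that it is being simulated.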