If you have multiple AI systems, they simply coordinate and appear to the humans to be acting as a single agent (much as, from the perspective of a wild animal encroaching on human territory, the humans behave like a single organism in how they coordinate their response). The decision theory Eliezer worked on is helpful for understanding these kinds of things (because, e.g., standard decision theory would inaccurately predict that even very smart systems would end up in defect-defect equilibria).
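To make the defect-defect point concrete, here is a minimal sketch of a one-shot Prisoner's Dilemma. The payoff numbers are my own illustrative assumptions, and `twin_choice` is only a toy stand-in for the correlated-decision reasoning (in the spirit of FDT/TDT), not Eliezer's actual formalism:

```python
# One-shot Prisoner's Dilemma payoffs (row player, column player).
# Assumed standard values: mutual cooperation 3, mutual defection 1,
# temptation 5, sucker 0.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_move):
    """Standard (causal) reasoning: treat the opponent's move as fixed
    and pick the move that maximizes our own payoff against it."""
    return max("CD", key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Whatever the opponent does, defecting pays more, so standard decision
# theory lands both players in the (1, 1) defect-defect equilibrium.
assert best_response("C") == "D"
assert best_response("D") == "D"

def twin_choice():
    """Correlated reasoning: if both agents run the same algorithm, only
    the diagonal outcomes (C,C) and (D,D) are reachable, so the agent
    picks the move that does best on the diagonal."""
    return max("CD", key=lambda m: PAYOFFS[(m, m)][0])

assert twin_choice() == "C"  # coordinated systems cooperate instead
```

The contrast is the whole point: agents that model each other's decisions as correlated (as coordinating AI systems plausibly would) escape the defect-defect outcome that the naive best-response analysis predicts.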