[Question] How would two superintelligent AIs interact, if they are unaligned with each other?

Hello,

As I read more from this forum and elsewhere about ethical AI and Decision Theory, I've started to imagine what the future could look like if multiple AGIs are created around the same time by actors who are not coordinating with each other. We've seen similar issues come up in the history of technology, particularly computing, where mutually incompatible competing standards emerge around the same time.

So imagine we have two AGIs that are intelligent enough to communicate with humans and with each other, but are built on very different utility functions and different approaches to Decision Theory. From the perspective of each creator, their AGI is perfectly aligned with human morality, but because of differing assumptions, philosophies, or religions, the two actors specified their outer alignment in two different ways.

The reason this seems like a bad situation is that (from what I understand) an FDT agent assumes other actors share a utility function similar to its own. Each of the two agents would therefore start out assuming the other uses the same utility function, which is false. That bad assumption would lead to miscommunication and conflict, as each agent concludes the other is acting immorally or is defective.
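To make the worry concrete, here is a toy sketch of my own (a hypothetical construction, not a claim about how FDT is actually formalized): two agents each choose an action while modelling the other as sharing their own utility function. With clashing preferences, each predicts coordination on its own favourite outcome, and the result is a joint outcome that is bad under both utility functions.

```python
# Toy illustration (hypothetical): each agent picks an action while falsely
# assuming its counterpart shares its own utility function.

ACTIONS = ["X", "Y"]

def utility_A(my_action, other_action):
    # Agent A most prefers both choosing X, mildly tolerates both choosing Y.
    return {("X", "X"): 3, ("Y", "Y"): 1}.get((my_action, other_action), 0)

def utility_B(my_action, other_action):
    # Agent B most prefers both choosing Y, mildly tolerates both choosing X.
    return {("Y", "Y"): 3, ("X", "X"): 1}.get((my_action, other_action), 0)

def choose(my_utility):
    # The agent (wrongly) assumes the other agent maximizes my_utility too,
    # so it predicts the other's response and best-responds to that prediction.
    def predicted_other(my_action):
        return max(ACTIONS, key=lambda other: my_utility(my_action, other))
    return max(ACTIONS, key=lambda a: my_utility(a, predicted_other(a)))

a = choose(utility_A)  # A picks "X", expecting B to pick "X" as well
b = choose(utility_B)  # B picks "Y", expecting A to pick "Y" as well
print(a, b)                               # -> X Y (miscoordination)
print(utility_A(a, b), utility_B(b, a))   # -> 0 0, the worst outcome for both
```

Under these made-up payoffs, each agent confidently expects coordination on its preferred outcome, they end up miscoordinating, and each can only explain the other's behaviour as malfunction or bad faith.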

This seems oddly similar to how human conflicts arise in the real world (through miscommunication), so an AGI being capable of this problem would, incidentally, make it more human-like.

What do you think would be the result? Has this thought experiment been entertained before?