the utility maximizing choice
The critical question is, whose utility?
Aumann's agreement theorem will not help here, since the FAIs will start with different values and different priors.
Each AI tries to maximize its own utility, of course. When they consider merging, each makes an estimate: how much of my original utility can I expect to retain after we both self-modify to maximize the new, merged utility function?
Then each AI makes its own choice and the two choices might well turn out to be incompatible.
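A toy model (my construction, not from the comment) of how the choices can come apart: suppose a merged agent would maximize a weighted sum w·u1 + (1−w)·u2 of the two original utility functions, and each AI proposes the weight w under which it retains the most of its own original utility. The outcomes, utilities, and weight grid below are all illustrative.

```python
# Toy model: two agents weigh a merger. Each agent i has a utility
# function u_i over outcomes. A merged agent would pick the outcome
# maximizing w*u1 + (1-w)*u2. Each agent estimates how much of its
# ORIGINAL utility survives under a given weight w, then proposes the
# w it likes best. If the proposals differ, the choices are incompatible.

outcomes = ["A", "B", "C"]
u1 = {"A": 10, "B": 4, "C": 0}   # agent 1's utility (illustrative numbers)
u2 = {"A": 0, "B": 5, "C": 9}    # agent 2's utility

def merged_choice(w):
    """Outcome the merged agent picks when maximizing w*u1 + (1-w)*u2."""
    return max(outcomes, key=lambda o: w * u1[o] + (1 - w) * u2[o])

def retained_utility(u, w):
    """How much of its original utility an agent keeps under weight w."""
    return u[merged_choice(w)]

# Each agent proposes the weight that preserves most of its own utility.
candidate_weights = [i / 10 for i in range(11)]
proposal1 = max(candidate_weights, key=lambda w: retained_utility(u1, w))
proposal2 = max(candidate_weights, key=lambda w: retained_utility(u2, w))

print(proposal1, proposal2)    # → 0.5 0.0: the two proposals disagree
```

Here agent 1 wants w = 0.5 (the smallest weight at which the merged agent still picks A) while agent 2 wants w = 0.0, so the two "rational" merger proposals are incompatible before trust even enters the picture.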
There is also the issue of information exchange: basically, it will be hard for the two AIs to trust each other.