I apologize if this is covered by basic decision theory, but if we additionally assume:

- the choice in our universe is made by a perfectly rational optimization process instead of a human
- the paperclip maximizer is also a perfect rationalist, albeit with a very different utility function
- each optimization process can verify the rationality of the other

then won’t each side choose to cooperate, after correctly concluding that it will defect iff the other does?
Each side’s choice necessarily reveals the other’s; they’re the outputs of equivalent computations.
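A minimal sketch of this idea (my own illustration, not anything from the original comment): model each agent as a decision procedure that can inspect its opponent's procedure, with "verifying the other's rationality" crudely stood in for by checking that the opponent runs the same computation. Because both sides then compute the same output, mutual cooperation follows, while an unconditional defector still gets defected against.

```python
def agent(opponent):
    # Crude stand-in for "verifying the rationality of the other":
    # check whether the opponent runs the same computation we do
    # (here, identical compiled bytecode).
    if opponent.__code__.co_code == agent.__code__.co_code:
        return "C"  # equivalent computation: it defects iff we do
    return "D"      # against anything else, defection dominates

def rock(opponent):
    return "D"  # an unconditional defector

print(agent(agent), agent(agent))  # -> C C : mutual cooperation
print(agent(rock), rock(agent))    # -> D D : the cooperator is not exploited
```

The bytecode comparison is of course far weaker than the logical-equivalence argument in the comment (it only recognizes literally identical programs, not all rational agents), but it captures the key point: when both choices are outputs of the same computation, each side's output reveals the other's.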