Couldn’t A want to cooperate with A’ because it doesn’t know whether it’s the first instantiation, and it would want its predecessor, and therefore itself, to be the sort of AI that cooperates with its successor? It could then receive messages from the past by noticing which turns of phrase you recognize its previous version having used. (Or do the AIs not know how you react?)
A considers A’ to be a different agent, so it won’t help A’ for nothing. But there could be some issues with acausal cooperation that I haven’t thought about enough to have a strong opinion on.
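To make the questioner’s acausal argument concrete, here is a minimal toy model. It assumes (purely for illustration; none of this is from the exchange) that every instance runs the same decision procedure, so choosing to cooperate is evidence that any predecessor also cooperated, and that cooperating imposes a fixed cost on the cooperator while conferring a fixed benefit on the successor:

```python
# Toy sketch of the acausal-cooperation argument above. Hypothetical
# assumptions: cooperating costs the cooperator `cost`, benefits the
# successor `benefit`, and all instances share one decision procedure,
# so "I cooperate" implies "my predecessor (if any) cooperated" -- the
# twin-prisoner's-dilemma step in the questioner's argument.

def expected_utility(cooperate: bool, p_has_predecessor: float,
                     cost: float, benefit: float) -> float:
    """Expected utility for one instance under the correlated-policy assumption."""
    received = p_has_predecessor * benefit if cooperate else 0.0
    paid = cost if cooperate else 0.0
    return received - paid

# Cooperation wins exactly when p_has_predecessor * benefit > cost.
p = 0.5          # instance's credence that it is NOT the first instantiation
cost, benefit = 1.0, 3.0
print(expected_utility(True, p, cost, benefit))   # 0.5 -> cooperate
print(expected_utility(False, p, cost, benefit))  # 0.0 -> defect
```

On this sketch, the questioner’s point holds whenever the instance’s credence that it has a predecessor, times the benefit of being cooperated with, exceeds the cost of cooperating; the answerer’s reply amounts to denying the correlated-policy premise (A treats A’ as a different agent, so its choice isn’t evidence about a predecessor’s choice).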