Given that acausal decision theory and especially acausal game theory are very much unsolved problems, I think we don’t really have much idea of what the acausal economy looks like. It seems totally plausible to me that it’s a dog-eat-dog world of superintelligences with more sophisticated decision theories competing with each other to exploit civilizations with less sophisticated decision theories, e.g., by winning commitment races. Given this, it’s not a slam dunk that giving unaligned AIs better decision theories is a good idea, or that “even if we lose, we win”.