Help me understand: how do multiverse acausal trades work?

While I’m intrigued by the idea of acausal trading, I confess that so far I fail to see how such trades make sense in practice. Here I share my (unpolished) musings, in the hope that someone can point me to a stronger (mathematically rigorous?) defense of the idea. Specifically, I’ve heard the claim that AI Safety should consider acausal trades over a Tegmarkian multiverse, and I want to know whether there is any validity to this.

Basically, I, in Universe A, want to trade with some agent that I imagine living in some other Universe B, and who similarly imagines me. Suppose I really like the idea of filling the multiverse with triangles. Then maybe I can do something in A that this agent likes; in return, it goes on to make triangles in B.
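To make the handshake concrete, here is a minimal toy sketch (my own illustration, not anyone’s formal model): each agent cooperates exactly when its internal model predicts the other would, so the “trade” only goes through when the mutual modelling works out.

```python
# Toy sketch of the acausal "handshake" (illustrative only).
# Each agent cooperates iff its internal model predicts the other cooperates.

def cooperate_if(predicted_other_cooperates: bool) -> bool:
    """Policy: do the favour exactly when I predict the counterpart would."""
    return predicted_other_cooperates

# I (in Universe A) hold a model of the agent in Universe B, and vice versa.
# Assumption for this toy run: both models happen to be accurate.
my_prediction_of_B = True   # I predict B would fill B with triangles for me
B_prediction_of_me = True   # B predicts I would do the favour it wants in A

i_do_favour_in_A = cooperate_if(my_prediction_of_B)
B_makes_triangles = cooperate_if(B_prediction_of_me)

print(i_do_favour_in_A, B_makes_triangles)  # True True: the trade "goes through"
# With accurate mutual models the two outcomes are linked: either both favours
# happen or neither does -- no causal link, only the modelling, connects them.
```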

Problem 1: There’s no Darwinian selective pressure to favor agents who engage in acausal trades. Eventually, natural selection will just eliminate agents who waste even a small fraction of their resources on these trades, rendering the concept irrelevant to a descriptive theory of rationality or morality. To the extent that we do value multiverse happiness, it should be treated as a misgeneralization of more useful forms of morality, persisting only because acausal trades never occurred to our ancestors.
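A back-of-the-envelope way to see the selection point (a toy replicator model with made-up numbers, nothing more): even a 1% ongoing resource cost makes a lineage’s population share shrink steadily.

```python
# Toy replicator model (made-up numbers, illustrative only).
# "Traders" divert a small fraction eps of resources to acausal favours,
# so they compound slightly more slowly than otherwise-identical non-traders.

r = 0.10    # per-generation growth rate with no resources diverted
eps = 0.01  # fraction of resources spent on acausal trades
traders, non_traders = 1.0, 1.0

for _ in range(1000):
    traders *= 1 + r * (1 - eps)
    non_traders *= 1 + r

share = traders / (traders + non_traders)
print(f"traders' population share after 1000 generations: {share:.3f}")
# ~0.29 here, and it tends to 0 as generations grow: even a tiny
# persistent cost is eventually selected away.
```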

Defense 1a: OK, maybe instead of inducing the agent to make triangles in B, I induce it to build copies of me in B. Then surely, on a multiverse scale, I’m being selected for? Well, not quite: selection in the long run is not about sheer numbers but about survival versus extinction, and here I’m still going extinct in Universe A, which likely also makes my side of the trade worthless to B.

Defense 1b: OK, even if caring about acausal trades is a misgeneralization in evolutionary terms, we do care about the multiverse, so shouldn’t we ensure that the ASI does too? Maybe a sufficiently powerful ASI can resist selection pressures forever, but this sounds highly speculative to me.

Problem 2: A more critical issue is that for every Universe B that rewards us for doing X, there’s another Universe C that rewards us for not doing X. How do we decide which of B or C to assign more weight? Solomonoff induction? One of my research projects (please stay tuned!) is a rigorous defense of Solomonoff induction, but the defense I have in mind merely argues that Solomonoff induction predicts better than other algorithms; it stops short of treating the prior as an objective measure over possible worlds. If anything, it suggests the opposite: my argument presents probabilistic beliefs as essentially emergent properties of successful predictors. Since beliefs about other universes are irrelevant to prediction, the idea of a probability measure over universes seems ill-defined.

Moreover, Solomonoff induction requires a reference UTM, and my previous paper suggests that the choice of UTM depends on the laws of physics of the universe doing the predicting. Such a universe-dependent measure lacks objective meaning in a true multiverse setting.
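For concreteness, here is the machine-dependence point in standard notation (these are the textbook definitions, not anything new from my argument):

```latex
% Universal a priori semimeasure relative to a universal prefix machine U
% (the sum ranges over programs p whose output starts with x):
\[
  M_U(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}.
\]
% Invariance theorem: for universal machines U_1, U_2 there is a constant
% c (the length of a translating program) such that
\[
  2^{-c}\, M_{U_2}(x) \;\le\; M_{U_1}(x) \;\le\; 2^{c}\, M_{U_2}(x).
\]
% The constant washes out for asymptotic prediction, but if M_U is read as a
% measure over whole universes, the choice of U can reverse the relative
% weights assigned to two worlds B and C.
```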

So what do you think: does multiverse trading really work?