Addressing Problem 1:
One unspoken assumption that acausal trade makes is that it takes only a “finite” amount of time to model every possible other agent and all their probabilities of occurring, while the multiverse is infinite. Therefore, if you are an agent with an infinite time horizon and a zero time-discount factor on your reward function, modelling all of those probabilities becomes a worthwhile investment. (I disagree with this assumption, but I have never read an Acausal Trade argument that didn’t make it.) Once you grant it, the rest makes more sense: the agent is already winning in universe A anyway, so it slightly weakens its grip on A in order to extend its influence into other universes. In evolutionary terms, it’s spending its energy on reproduction.
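A toy calculation illustrates why the zero-discount assumption does the heavy lifting here. All the numbers below are made up; the point is only that with any discounting the total future payoff is bounded, while without discounting a constant per-step gain eventually repays any finite modelling cost.

```python
# Sketch: why a zero discount factor makes any finite modelling cost worthwhile.
# With discount gamma < 1, total future reward is bounded by gain / (1 - gamma),
# so a large upfront cost can never be repaid; with gamma = 1 over a long enough
# horizon, any constant per-step gain eventually beats any finite cost.
# (Illustrative numbers only, not taken from the post.)

def discounted_return(per_step_gain, upfront_cost, gamma, horizon):
    """Cumulative (possibly discounted) return of paying a one-time cost
    in exchange for a small constant per-step gain."""
    total = -upfront_cost
    for t in range(horizon):
        total += per_step_gain * gamma ** t
    return total

cost = 1000.0  # finite price of modelling the other agents
gain = 0.01    # tiny per-step benefit from trading with them

# Discounted agent: payoff is bounded by 0.01 / 0.01 = 1, so the trade never pays.
print(discounted_return(gain, cost, gamma=0.99, horizon=10**6))  # ≈ -999

# Undiscounted agent: payoff grows linearly and eventually dominates the cost.
print(discounted_return(gain, cost, gamma=1.0, horizon=10**6))   # ≈ +9000
```

The asymmetry is the whole argument: the investment is only obviously rational for an agent whose effective discount factor is exactly one.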
Addressing Problem 2:
I fully agree. However, I would also point out that just because probabilities don’t have an objective definition in these scenarios doesn’t mean that an entity won’t arise that optimizes over them anyway, out of misgeneralization. This is neither right nor wrong; it’s just a thing that will happen when an entity that thinks in terms of probabilities finds out that the basis of its thought patterns (probability theory) is actually ill-defined. It’s either that or it goes mad.
If you are taking an evolutionary approach, some ideas come to mind:
From a “global” perspective, the multiverse’s evolutionary winner is probably something ridiculously small that happened to arise in a universe with low complexity and therefore high runtime. It’s kind of silly to think about, but there is an ant-like entity out there that outperforms godlike AIs. You might say that doesn’t matter because that universe is causally isolated from the rest, so why should we care? But this perspective points us in a useful direction for addressing problem 2: we care about possible universes that could plausibly be directly causally entangled with our own, even if they don’t appear so at first glance. Take this with even more grains of salt than the rest of this post, but to me it means that Acausal Trade makes sense when it is done with entities like our own hypothetical AI descendants. Those are much easier to define and understand than hypothetical, mathematically abstract agents. We can reason about their motivations quickly, and we can in fact simulate them directly, because we are already doing it. It’s much easier to determine the source code of a hypothetical agent in another universe if you are the one who wrote the source code of its ancestor.
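To make the “low complexity, therefore high runtime” intuition concrete, here is a sketch of a classic dovetailing schedule, in which program i is advanced during round r only when 2**i divides r, so the simplest programs accumulate exponentially more computation. The schedule itself is standard; treating the programs as stand-ins for universes is my gloss, not something from the original post.

```python
# Sketch of the "low complexity -> high runtime" intuition. In a standard
# dovetailing scheme over an enumeration of programs, program i is stepped
# in round r only when r is divisible by 2**i, so program i receives about
# 2**-i of the total compute. The simplest "universe" dominates the runtime.

def dovetail_steps(num_programs, total_rounds):
    """Count how many steps each program receives under the schedule
    'run program i in round r iff 2**i divides r'."""
    steps = [0] * num_programs
    for r in range(1, total_rounds + 1):
        for i in range(num_programs):
            if r % (2 ** i) == 0:
                steps[i] += 1
    return steps

print(dovetail_steps(5, 1000))  # [1000, 500, 250, 125, 62]
```

Under a schedule like this, an “ant-like” inhabitant of program 0’s universe simply gets orders of magnitude more lived time than anything running in a complex universe further down the enumeration.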
If we go from “Acausal Trade with the set of all possible agents” to “Acausal Trade with the set of agents we are actually likely to encounter because we already have good reasons to know them” then it becomes much less impractical.
“Reproducing in another Universe” is a tricky concept! I feel like simple beings that succeed in this manner should be thought of as memes from the perspective of Universes like A that instantiate them. Their presence in B is kind of irrelevant: maybe A instantiates the agents because of some trade in B, but A is free to place pretty much arbitrary weights on other Universes and the preferences therein. Given this ambiguity, we might as well remove one step and just say that A likes the B agent for some unstated arbitrary reason, without specific mention of trades. We could view Conway glider guns as a popular meme from the multiverse, but what use is that?
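For concreteness, here is a minimal Game of Life sketch showing the sense in which a glider is a self-propagating pattern: after four generations it is the same shape translated one cell diagonally, which is exactly the property a glider gun exploits to pump copies of it outward.

```python
# Minimal Conway's Game of Life sketch. A glider reproduces its own shape,
# shifted by (1, 1), every four generations -- the sense in which gliders
# (and the guns that emit them) can be read as self-propagating patterns
# inside a universe's rules.
from collections import Counter

def step(cells):
    """One Game of Life generation; `cells` is a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next generation iff it has 3 neighbours,
    # or 2 neighbours and is currently live.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after = glider
for _ in range(4):
    after = step(after)

# After four steps the glider is the same shape, shifted by (1, 1).
print(after == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The pattern’s “offspring” is just itself at a new location, which is why calling it a meme of the rules, rather than a trader, seems like the more economical description.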
I’m reminded of Samuel Alexander’s thought experiment, in which Earth has a one-way portal to Paradise. Perhaps most people would take this portal initially; however, from the perspective of Earth’s ecosystem, entering this portal is equivalent to death. Therefore, after enough natural selection, we should expect that beings on Earth will treat the portal with the same degree of fear and avoidance as death, even if they can clearly see Paradise on the other side. Arguably, we already find ourselves in this situation with respect to our logical continuation in the blissful afterlife of many religions.
Ultimately, I feel that a multiverse trade only provides benefits in a Universe of our own imagination, which may be said to exist in some logical sense, but lacks an objective measure relative to all the other worlds that we could (or could not) have imagined. And in some of these worlds, the trade would instead be detrimental!