Help me understand: how do multiverse acausal trades work?
While I’m intrigued by the idea of acausal trading, I confess that so far I fail to see how such trades make sense in practice. Here I share my (unpolished) musings, in the hopes that someone can point me to a stronger (mathematically rigorous?) defense of the idea. Specifically, I’ve heard the claim that AI Safety should consider acausal trades over a Tegmarkian multiverse, and I want to know if there is any validity to this.
Basically, I in Universe A want to trade with some agent that I imagine to live in some other Universe B, who similarly imagines me. Suppose I really like the idea of filling the multiverse with triangles. Then maybe I can do something in A that this agent likes; in return, it goes on to make triangles in B.
Problem 1: There’s no Darwinian selective pressure to favor agents who engage in acausal trades. Eventually, natural selection will just eliminate agents who waste even a small fraction of their resources on these trades, rendering the concept irrelevant to a descriptive theory of rationality or morality. To the extent that we do value multiverse happiness, it should be treated as a misgeneralization of more useful forms of morality, persisting only because acausal trades never occurred to our ancestors.
Defense 1a: Ok, maybe instead of inducing the agent to make triangles in B, I induce it to build copies of me in B. Then surely, on a multiverse scale, I’m being selected for? Well, not quite: selection in the long term is not about sheer numbers but about survival vs. extinction, and here I’m still going extinct in Universe A, which likely also makes my trades worthless to B.
Defense 1b: Ok even if caring about acausal trades is a misgeneralization in evolutionary terms, since we care about the multiverse, shouldn’t we ensure that the ASI does too? Maybe a sufficiently powerful ASI can forever resist selection pressures, but this sounds highly speculative to me.
Problem 2: A more critical issue is that for every Universe B that rewards us for doing X, there’s another Universe C that rewards us for not doing X. How do we reason about which of B or C to assign more weight? Solomonoff induction? One of my research projects (please stay tuned!) is a rigorous defense of Solomonoff induction, but the defense I have in mind merely argues that Solomonoff induction predicts better than other algorithms. It stops short of treating it as an objective measure over possible worlds. If anything, it actually suggests the opposite: my argument presents probabilistic beliefs as essentially emergent properties of successful predictors. Since these multiverse beliefs are irrelevant to prediction, the idea of a probability measure over universes seems ill-defined. Moreover, Solomonoff induction requires a reference UTM, and my previous paper suggests this depends on the laws of physics. Such a universe-dependent measure lacks objective meaning in a true multiverse setting.
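To make the reference-machine dependence concrete, here is the standard statement (nothing beyond the usual invariance theorem, included as an illustration rather than as part of my argument):

$$ m_U(x) \;=\; \sum_{p\,:\,U(p)=x} 2^{-\ell(p)}, \qquad 2^{-c_{UV}}\, m_V(x) \;\le\; m_U(x) \;\le\; 2^{\,c_{VU}}\, m_V(x) \quad \text{for any two UTMs } U, V, $$

where the constants depend only on the machines (roughly, the lengths of their mutual interpreters), not on $x$. The two priors agree only up to a machine-dependent multiplicative factor, which can be astronomically large, so without a privileged choice of $U$ there is no single “objective” measure to read off.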
So what do you think: does multiverse trading really work?
Problem 1 is the wrong objection.
CDT agents are not capable of cooperating in the Prisoner’s Dilemma, and therefore they are selected out. EDT agents are not capable of refusing to pay in XOR blackmail (or, symmetrically, of paying in Parfit’s hitchhiker), and therefore they are selected out.
I think you will be interested in this paper.
Yeah, agents incapable of acausal cooperation are already being selected out: Most of the dominant nations and corporations are to some degree internally transparent, or bound by public rules or commitments, which is sufficient for engaging in acausal trade. This will only become more true over time: Trustworthiness is profitable, a person who can’t keep a promise is generally an undesirable trading partner, and artificial minds are much easier to make transparent and committed than individual humans and even organisations of humans are.
Also, technological (or post-biological) eras might just not have ongoing Darwinian selection. Civilisations that fail to seize control of their own design process won’t be strong enough to have a seat at the table; those at the table will be equipped with millions of years of advanced information technology, cryptography and game theory, and perfect indefinite coordination will be a solved problem. I can think of ways this could break down, but they don’t seem like the likeliest outcomes.
Thanks, I had been hoping to see an evolutionary analysis of decision theories, so I’ll check out the paper sometime! Whichever decision theory turns out to be evolutionarily optimal, I imagine it still won’t engage in multiverse trade; does the paper disagree?
We can send a spaceship beyond an event horizon and still care about what is going to happen on it after it crosses, despite this being utterly irrelevant to our genetic fitness in any causal sense. If we are capable of developing such preferences, I don’t see any strong reason why we would develop a strictly single-universe decision theory.
Multiversal acausal trading is just a logical consequence of LDT, and I expect the majority of powerful agents to have an LDT-style decision theory, not an LDT-but-without-multiverse decision theory.
Hm I think LDT must be fleshed out in more detail, to clarify which consequences follow or which generalizations are most natural. Arguing from selection seems like a powerful tool here; nonetheless, this seems like a difficult project. Suppose you live in a Universe where you often get cloned with mutations, and made to play prisoner’s dilemmas against your imperfect copies; how much correlation does the most successful version of LDT assign between the two competing policies? The full theory must deal with even more general scenarios.
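As a toy illustration of what’s at stake (my own minimal model, which simply assumes the imperfect copy mirrors your move with probability q — not something LDT itself specifies):

```python
# Toy "twin" prisoner's dilemma: the opponent is an imperfect copy that plays
# the same move as you with probability q (a crude stand-in for "correlation").
# Standard payoff ordering: T > R > P > S.
T, R, P, S = 5, 3, 1, 0

def expected_payoff(my_move: str, q: float) -> float:
    """Expected payoff if the copy mirrors my_move with probability q."""
    if my_move == "C":
        return q * R + (1 - q) * S   # copy cooperates with prob q, defects otherwise
    else:
        return q * P + (1 - q) * T   # copy defects with prob q, cooperates otherwise

# Cooperating beats defecting exactly when q > (T - S) / ((T - S) + (R - P)).
threshold = (T - S) / ((T - S) + (R - P))  # = 5/7 ≈ 0.714 with these payoffs
for q in (0.5, 0.7, 0.75, 0.9):
    print(q, expected_payoff("C", q), expected_payoff("D", q))
```

With these payoffs the crossover sits at q = 5/7, so everything hinges on how the theory assigns that correlation to a mutated copy, which is exactly the part I’d like to see fleshed out.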
I don’t think it really works, for similar reasons: https://www.lesswrong.com/posts/y3zTP6sixGjAkz7xE/pitfalls-of-building-udt-agents
I also share your intuition that there is no objective prior on the mathematical multiverse. Additionally, I am not convinced we should care about (other universes in) the mathematical multiverse.
Certainly not clear to me that acausal trade works, but I don’t think these problems are correct.
Consider a post-selection state — a civilization that has stable control over a fixed amount of resources in its universe.
idk, but it feels possible (and it’s just a corollary of the “model the distribution of other civilizations that want to engage in acausal trade” problem).
I wrote another post specifically arguing for the selection-based view of rationality, and opening the floor to alternatives!
Acausal trades almost certainly don’t work.
There are more possible agents than atom-femtoseconds in the universe (to put it mildly), so if you devote even one femtosecond of one atom to modelling the desires of any given acausal agent then you are massively over-representing that agent.
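Rough numbers, just to illustrate the scale mismatch (my own back-of-the-envelope, using the usual estimates of ~10^80 atoms and ~10^17 seconds):

$$ 10^{80}\ \text{atoms} \;\times\; 4\times 10^{17}\ \text{s} \;\times\; 10^{15}\ \text{fs/s} \;\approx\; 10^{113}\ \text{atom-femtoseconds}, \qquad\text{whereas}\qquad 2^{\,8\times 10^{3}} \;\approx\; 10^{2400} $$

is the number of distinct agents specifiable by even a single kilobyte of code. That tiny slice of agent-space already outnumbers the available atom-femtoseconds by more than two thousand orders of magnitude.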
The best that is possible is some sort of averaged distribution, and even then it’s only worth modelling agents capable of conducting acausal trade with you—but not you in particular. Just you in the sense of an enormously broad reference class in which you might be placed by agents like them.
Given even an extremely weak form of the orthogonality thesis, the net contribution of your entire reference class will be as close to zero as makes no difference—not even enough to affect one atom (or some other insignificantly small equivalent in other physics). If instead orthogonality doesn’t hold even slightly (that is, it fails completely), then you already know that your desires are reflected in the other reference classes and acausal trade is irrelevant.
So the only case that is left is one in which you know that orthogonality almost completely fails, and there are only (say) 10^1 to 10^30 or so reasonably plausible sets of preferences for sufficiently intelligent agents instead of the more intuitively expected 10^10000000000000000000 or more. This is an extraordinarily specific set of circumstances! Then you need that ridiculously specific set to include a reasonably broad but not too broad set of preferences for acausal trade in particular, along with an almost certain expectation that they actually exist in any meaningful sense that matters for your own preferences and likewise that they consider your preference class to meaningfully exist for theirs.
Then, to the extent that you believe that all of these hold and that all of the agents that you consider to meaningfully exist outside your causal influence also hold these beliefs, you can start to consider what you would like to expect in their universes more than anything you could have done with those resources in your own. The answer will almost certainly be “nothing”.
This is a really weird line of reasoning, because “multiversal trading” doesn’t mean “trading with the entire multiverse”; it means “finding a suitable trading partner in the multiverse”.
First of all, there is a very broad but well-defined class of agents to which humans belong: the class of agents with indexical preferences. It’s likely that indexical preferences are relatively weird in the multiverse, but they are simple enough to be considered in any sufficiently broad list of preferences, as a certain sort of curiosity for multiversal decision theorists.
For all we know, our universe is going to end one way or another (heat death, cyclic collapse, Big Rip, or something else). Because we have indexical preferences, we would like to escape the universe with our subjective continuity intact. And because, ceteris paribus, very small shares of reality suffice to give us subjective continuity, this creates large gains from trade with any entities that don’t care about indexical matters.
(And if our universe is not going to end, that means we have effectively infinite compute, and therefore we actually can perform a lot of acausal trading.)
Next, there are large restrictions on the search space. As you said, we both need to be able to consider each other. I think that, say, considering physics in which analogs of quantum computers can solve NP problems in polynomial time is quite feasible—we have a rich theory of approximation, and we are going to discover even more of it.
Another restriction is around preferences. If their preference is for something we can actually produce, like molecular squiggles, then we should restrict ourselves to partners whose physics is sufficiently similar to ours.
We can go further and restrict to sufficiently concave preferences, so that we consider a broad class of agents, each of which may have some very specific, hard-to-specify peak of its utility function (like very precise molecular squiggles), but which share a common broad basin of good-enough states (they would like to have precise molecular squiggles, but they would consider it sufficient payment if we just produce a lot of granite spheres).
Given all these restrictions, I don’t find it plausible that future human-aligned superintelligences with galaxies of computronium won’t find any way to execute such trades, given the incentives.
My post is almost entirely about the enormous hidden assumptions in the word “finding” within your description “finding a suitable trading partner in the multiverse”. The search space isn’t just so large that you need galaxies full of computronium; that’s not even remotely close to enough. It’s almost certainly not even within an order of magnitude of the number of orders of magnitude that it takes. And it’s not enough to just find one, because you need to average expected value over all of them to get any value at all.
The expected gains from every such trade are correspondingly small, even if you find some.
Addressing Problem 1: One unspoken assumption that acausal trade makes is that it only takes a “finite” amount of time to model every possible other agent and all their probabilities of occurring, while the multiverse is infinite. Therefore, if you are an agent with an infinite time horizon and no time discounting in your reward function, then modelling all of those probabilities becomes a worthwhile investment. (I disagree with this assumption, but I have never read an acausal trade argument that didn’t make it.) Once you assume this, it makes more sense: the agent is already winning in universe A anyway, so it slightly weakens its grip on it in order to extend its influence into other universes. In evolutionary terms: it’s spending its energy on reproduction.
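Spelled out with the usual discounted-reward arithmetic (my own gloss on that assumption, not a quote from any particular acausal trade argument):

$$ \sum_{t=0}^{\infty} \gamma^{t} r \;=\; \frac{r}{1-\gamma}\ \ \text{is finite for } \gamma < 1, \qquad \text{but diverges for } \gamma = 1, $$

so with any discounting a large one-off modelling cost can swamp the perpetual payoff from trade, while with no discounting and an infinite horizon, any finite modelling cost is eventually repaid.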
Addressing Problem 2: I fully agree. However, I would also point out that just because probabilities don’t have an objective definition in these scenarios doesn’t mean that an entity won’t arise that optimizes over them anyway, out of misgeneralization. This is neither right nor wrong. It’s just a thing that will happen when you have an entity that thinks in terms of probabilities and it finds out that the basis of its thought patterns (probability theory) is actually ill-defined. It’s either that or it goes mad.
If you are taking an evolutionary approach, some ideas come to mind: From a “global” perspective, the multiverse’s evolutionary winner is probably something ridiculously small that happened to arise in a universe with low complexity and therefore high runtime. It’s kind of silly to think about, but there is an ant-like entity out there that outperforms godlike AIs. You might say that doesn’t matter because that universe is causally isolated from the rest, so what do we care. But if we take this perspective then it points us in a direction that is useful for addressing problem 2 better: We care about possible universes that plausibly could be directly causally entangled with our universe even if they don’t appear so at first glance. Take this with even more grains of salt than the rest of this post, but to me it means that Acausal Trade makes sense when it is done with entities like our own hypothetical AI descendants. Those are much easier to define and understand than hypothetical mathematically abstract agents. We can think about their motivations in little time, and we can in fact simulate them directly because we are already doing it. It’s much easier to determine the source code of a hypothetical agent in another universe if you are the one who wrote the source code of its ancestor.
If we go from “Acausal Trade with the set of all possible agents” to “Acausal Trade with the set of agents we are actually likely to encounter because we already have good reasons to know them” then it becomes much less impractical.
“Reproducing in another Universe” is a tricky concept! I feel like simple beings that succeed in this manner should be thought of as memes from the perspective of Universes like A that instantiate them. Their presence in B is kind of irrelevant: maybe A instantiates the agents because of some trade in B, but A is free to place pretty much arbitrary weights on other Universes and the preferences therein. Given this ambiguity, we might as well remove one step and just say that A likes the B agent for some unstated arbitrary reason, without specific mention of trades. We could view Conway glider guns as a popular meme from the multiverse, but what use is that?
I’m reminded of Samuel Alexander’s thought experiment, in which Earth has a one-way portal to Paradise. Perhaps most people would take this portal initially; however, from the perspective of Earth’s ecosystem, entering this portal is equivalent to death. Therefore, after enough natural selection, we should expect that beings on Earth will treat the portal with the same degree of fear and avoidance as death, even if they can clearly see Paradise on the other side. Arguably, we already find ourselves in this situation with respect to our logical continuation in the blissful afterlife of many religions.
Ultimately, I feel that a multiverse trade only provides benefits in a Universe of our own imagination, which may be said to exist in some logical sense, but lacks an objective measure relative to all the other worlds that we could (or could not) have imagined. And in some of these worlds, the trade would instead be detrimental!
Which trades? I don’t think I’ve heard this. I think multiversal acausal trade is fine and valid, but my impression is it’s not important in AI safety.
I think the idea is that our Friendly ASI should cooperate with other agents in order to spread our values of happiness and non-suffering throughout the multiverse. I’m not citing anyone in particular, because I’m not sure who the leading proponents are or if I’m representing them correctly, so perhaps this is hearsay. What do you consider a more important application of multiversal acausal trade?
It’s possible you overheard me bring it up when I was making a claim about cross-Everett acausal trading between acausal-trade-inclined superintelligences. It’s a thing I’ve chatted about with people before, and I think I may have done so recently; not sure who originated the idea. It’s a seriously disappointing consolation prize, but if acausal trade is easier to encode than everything else for some reason (which would already be a weird thing to be true), then maybe in some worlds we get strong alignment wins, and those worlds can bargain for trade with the acausal-trade-only pseudo-win worlds, maybe even for enough to keep humans around in a tiny sliver of the universe’s negentropy for a while. Or something.
idk, I’m not really convinced by this argument I’m making. sometimes I think about acausal trade between noised versions of myself across timelines as a way to give myself a pep talk.
It seems easier to imagine trading across Everett branches, assuming one thinks they exist at all. They come from a similar starting point but can end up very different. That reduces the severity of Problem 2.
Yes I think both objections are considerably weaker when the probabilities come from the physics of our actual Universe. While it’s still tricky to pin down the “correct” decision theory in this setting, quetzal_rainbow’s comment here includes a paper that might contain the answer.
I think that it’s good to think concretely about what multiverse trading actually looks like, but I think Problem 1 is a red herring—Darwinian selective pressure is irrelevant where there’s only one entity, and ASIs should ensure that, at least over a wide swathe of the universe, there is only one entity. At the boundaries between two ASIs, if defence is simpler than offence, there’ll be plenty of slack for non-selective preferences.
My bigger problem is that multiverse acausal trade requires that agent A in universe 1 can simulate that universe 2 exists, with agent B, which will in turn simulate agent A in universe 1. That is not theoretically impossible (if, for example, the amount of available compute increases without bound in both universes, or if it’s possible to prove facts about the other universe without needing to simulate the whole thing), but it does seem incredibly unlikely—and almost certainly not worth the cost required to search for such an agent.
Not as far as I’ve ever been able to discern.
There’s also problem 3 (or maybe it’s problem 0): the whole thing assumes that you accept that these other universes exist in any way that would make it desirable to trade with them to begin with. Tegmarkianism isn’t a given, and satisfying the preferences of something nonexistent, for the “reward” of it creating a nonexistent situation where your own preferences are satisfied, is, um, nonstandard. Even doing something like that with things bidirectionally outside of your light cone is pretty fraught, let alone things outside of your physics.
Acausal trade seems to appeal either to people who want to be moral realists but can’t quite figure out how to pull that off in any real framework, so they add epicycles… or to people who just like to make their worldviews maximally weird.
Are you proposing that the universe outside of your lightcone might (like non-negligible P) just not be real?
Not really, not for the light cone case. You could maybe make a case that it’s in some way less “real” than anything causally connected to you, but I’m willing to basically assign it reality.
I think the idea of attaching a probability to whether it’s real badly misses the point, though. That’s not necessarily the kind of proposition that has a probability. First you have to define what you mean by “real” or “exists” (and whether the two mean the same thing). It’s not obvious at all. We say that my keyboard exists, and we say that the square root of two exists, but those don’t mean the same thing… and a lot of the associations and ways of thinking around the word “real” get tangled up with causality.
But anyway, as I said, for most purposes I’m prepared to act as though stuff outside my light cone exists and/or is real, in the same way I’m willing to act as though stuff technically inside my light cone exists and/or is real, even when the causal connections between me and it are so weak as to be practically unimportant.
The problem in the “outside the light cone” trade case is more about not having any way to know how much of whatever you’re trading with is real, for any definition of real, nor what its nature may be if it is. You don’t know the extent or even the topology (or necessarily even the physical laws) of the Universe outside of your light cone. It may not be that much bigger than the light cone itself. It may even be smaller than the light cone in the future direction. Maybe you’ll have some strong hints someday, but you can’t rely on getting them. And at the moment, as far as I can tell, cosmology is totally befuddled on those issues.
And even if you have the size, you still get back to the sorts of things the original post talks about. If it’s finite you don’t know how many entities there are in it, or what proportion of them are going to “trade” with you, and if it’s infinite you don’t know the measure (assuming that you can define a measure you find satisfying). For that matter, there are also problems with things that are technically inside your light cone, but with which you can’t communicate practically.
A core element is that you expect acausal trade among far more intelligent agents, such as AGIs or even ASIs, and that they’ll be using approximations.
Problem 1: There isn’t going to be much Darwinian selection pressure against a civilization that can rearrange stars and terraform planets. I’m of the opinion that such selection has mostly stopped mattering now, and will only matter even less over time, as long as we don’t end up in an “everyone has an AI and competes in a race to the bottom” scenario. I don’t think it is that odd that an ASI could resist selection pressures: it operates on a faster time-scale and can apply more intelligent optimization than evolution can, towards the goal of keeping itself and whatever civilization it manages stable.
Problem 2: I find it somewhat plausible that there are some sufficiently well pinned-down variables that could get us to a more objective measure. However, I don’t think it is needed, and most presentations of this don’t go for an objective distribution.
So, to me, using a UTM that is informed by our own physics and reality is fine. This presumably results in more of a ‘trading nearby’ flavor, the typical example being trade across Everett branches, but in more generality. You have more information about how those nearby universes look anyway.
The downside here is that whatever true distribution there is, you’re not trading directly against it. But if it is too hard for an ASI in our universe to manage, then presumably many agents aren’t managing to acausally trade against the true distribution regardless.
I think I would make this more specific: there’s no external pressure from that other universe, sort of by definition. So for acausal trade to still work, you’re left only with internal pressure.
The question becomes, “Do one’s own thoughts provide this pressure in a usefully predictable way?”
Presumably it would have to happen necessarily, or be optimized away. Perhaps as a natural side effect of having intelligence at all, for example. Which I think would be similar in argument to “Do natural categories exist?”