There are more possible agents than atom-femtoseconds in the universe (to put it mildly), so if you devote even one femtosecond of one atom to modelling the desires of any given acausal agent then you are massively over-representing that agent.
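The scale mismatch here can be made concrete with a rough back-of-envelope calculation (all numbers illustrative, not precise figures: ~10^80 atoms in the observable universe, ~4.4×10^17 seconds of age, and a deliberately conservative lower bound on the agent space counting only the distinct 1-kilobit programs):

```python
# Atom-femtoseconds available in the observable universe (illustrative).
atoms = 10**80
age_in_fs = int(4.4e17) * 10**15          # ~4.4e32 femtoseconds
atom_femtoseconds = atoms * age_in_fs     # ~4.4e112

# A very conservative lower bound on distinct possible agents:
# just the distinct programs expressible in 1 kilobit.
possible_agents = 2**1000                 # ~1.07e301

# Even this tiny slice of agent-space dwarfs the universe's compute budget.
print(possible_agents > atom_femtoseconds)  # True
```

So even under assumptions that wildly undercount the agent space, there are vastly more candidate agents than atom-femtoseconds to model them with.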
The best that is possible is some sort of averaged distribution, and even then it’s only worth modelling agents capable of conducting acausal trade with you—but not you in particular. Just you in the sense of an enormously broad reference class in which you might be placed by agents like them.
Given even an extremely weak form of orthogonality thesis, the net contribution of your entire reference class will be as close to zero as makes no difference—not even enough to affect one atom (or some other insignificantly small equivalent in other physics). If orthogonality doesn’t hold even slightly, then you already know that your desires are reflected in the other reference classes and acausal trade is irrelevant.
So the only case that is left is one in which you know that orthogonality almost completely fails, and there are only (say) 10^1 to 10^30 or so reasonably plausible sets of preferences for sufficiently intelligent agents instead of the more intuitively expected 10^10000000000000000000 or more. This is an extraordinarily specific set of circumstances! Then you need that ridiculously specific set to include a reasonably broad but not too broad set of preferences for acausal trade in particular, along with an almost certain expectation that they actually exist in any meaningful sense that matters for your own preferences and likewise that they consider your preference class to meaningfully exist for theirs.
Then, to the extent that you believe all of these hold, and that all of the agents you consider to meaningfully exist outside your causal influence also hold these beliefs, you can start to ask what you would want done in their universes more than anything you could have done with those resources in your own. The answer will almost certainly be “nothing”.
This is a really weird line of reasoning, because “multiversal trading” doesn’t mean “trading with the entire multiverse”; it means “finding a suitable trading partner in the multiverse”.
First of all, there is a very broad but well-defined class of agents to which humans belong: agents with indexical preferences. Indexical preferences are probably relatively rare in the multiverse, but they are simple enough to appear in any sufficiently broad list of preferences, as a certain sort of curiosity for multiversal decision theorists.
For all we know, our universe is going to end one way or another (heat death, cyclic collapse, Big Rip, or something else). Because we have indexical preferences, we would like to escape the universe with subjective continuity. And because, ceteris paribus, even very small shares of reality suffice to give us subjective continuity, this creates large gains from trade with entities that don’t care about indexicality.
(And if our universe is not going to end, that means we have effectively infinite compute, and therefore we actually can perform a lot of acausal trading.)
Next, there are large restrictions on the search space. As you said, we both need to be able to consider each other. I think that, say, considering physics in which analogs of quantum computers can solve NP problems in polynomial time is quite feasible: we have a rich theory of approximation, and we are going to discover even more of it.
Another restriction is around preferences. If their preferences are for something we can actually produce, like molecular squiggles, then we should restrict ourselves to physics sufficiently similar to ours.
We can go further and restrict attention to sufficiently concave preferences, so that we consider a broad class of agents, each of which may have some very specific, hard-to-specify peak of its utility function (like very precise molecular squiggles), but which share a common broad basin of good-enough states (they would like to have precise molecular squiggles, but would consider it sufficient payment if we just produce a lot of granite spheres).
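A toy illustration of such a “sharp peak, broad basin” preference (the numbers and the utility shape are made up purely for illustration): hitting the exact peak by search is hopeless, but landing in the good-enough basin is easy.

```python
import random

def utility(x, peak=0.123456789):
    """Toy preference: a razor-thin peak plus a broad 'good enough' basin."""
    if abs(x - peak) < 1e-12:   # exact target (the precise molecular squiggles)
        return 1.0
    if 0.0 <= x <= 1.0:         # broad basin of acceptable states (granite spheres)
        return 0.9
    return 0.0

random.seed(0)
offers = [random.uniform(-2, 2) for _ in range(10_000)]
best = max(utility(x) for x in offers)
print(best)  # 0.9: random search finds the basin easily, the peak essentially never
```

The point of the concavity restriction is exactly this shape: a trading partner need not reproduce the peak, only reach the basin.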
Given all these restrictions, I don’t find it plausible that future human-aligned superintelligences with galaxies of computronium won’t find any way to execute trades, given the incentives.
My post is almost entirely about the enormous hidden assumptions in the word “finding” in your phrase “finding a suitable trading partner in the multiverse”. The search space isn’t just so large that you need galaxies full of computronium, because that’s not even remotely close to enough. It’s almost certainly not even within an order of magnitude of the number of orders of magnitude that it takes. And it’s not enough to just find one, because you need to average expected value over all of them to get any value at all.
The expected gains from every such trade are correspondingly small, even if you find some.
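The dilution can be sketched with toy numbers (all three quantities below are hypothetical, chosen only to show the shape of the argument): even a huge payoff from one successful trade is multiplied by the vanishing share of resources any single candidate partner can receive.

```python
# Toy dilution argument (all numbers hypothetical):
value_per_successful_trade = 1e30   # payoff from one trade, in arbitrary units
candidate_partners = 10**100        # size of an already-restricted search space
resources = 10**40                  # total compute units available to spend

# Resources must be spread across indistinguishable candidates.
share_per_candidate = resources / candidate_partners   # ~1e-60
expected_gain = value_per_successful_trade * share_per_candidate
print(expected_gain)  # ~1e-30: vanishingly small despite the enormous payoff
```

Under these assumptions the expected gain per trade is negligible even before accounting for the uncertainty in whether the partner exists at all.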
Acausal trades almost certainly don’t work.