Beyond Astronomical Waste

Faced with the astronomical amount of unclaimed and unused resources in our universe, one’s first reaction is probably wonderment and anticipation, but a second reaction may be disappointment that our universe isn’t even larger and doesn’t contain even more resources (such as the ability to support 3^^^3 human lifetimes, or perhaps to perform an infinite amount of computation). In a previous post I suggested that the potential amount of astronomical waste in our universe seems small enough that a total utilitarian (or the total utilitarianism part of someone’s moral uncertainty) might reason as follows: since one should have made a deal to trade away power/resources/influence in this universe for power/resources/influence in universes with much larger amounts of available resources, it would be rational to behave as if that deal had actually been made. But for various reasons a total utilitarian may not buy that argument. In that case, another line of thought is to look for things to care about beyond the potential astronomical waste in our universe; in other words, to explore possible sources of expected value that may be much greater than what can be gained by just creating worthwhile lives in this universe.
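
(In case the "3^^^3" above is unfamiliar: it is Knuth's up-arrow notation written in ASCII, and a quick expansion of its definition is enough to show why it dwarfs anything our universe could plausibly support:

$$
\begin{aligned}
3 \uparrow 3 &= 3^3 = 27,\\
3 \uparrow\uparrow 3 &= 3^{3^3} = 3^{27} = 7{,}625{,}597{,}484{,}987,\\
3 \uparrow\uparrow\uparrow 3 &= 3 \uparrow\uparrow (3 \uparrow\uparrow 3) = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987 \text{ threes}},
\end{aligned}
$$

i.e., a power tower of threes roughly 7.6 trillion levels tall.)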

One example of this is the possibility of escaping, or being deliberately uplifted from, a simulation that we’re in, into a much bigger or richer base universe. Or more generally, the possibility of controlling, through our decisions, the outcomes of universes with much greater computational resources than the one we’re apparently in. Under an assumption such as Tegmark’s Mathematical Universe Hypothesis (MUH), it seems likely that there are many simulations of our universe running all over the multiverse, including in universes that are much richer than ours in computational resources. If such simulations exist, it also seems likely that we could escape from some of them, for example through one of these mechanisms:

  1. Exploiting a flaw in the software or hardware of the computer that is running our simulation (including “natural simulations” where a very large universe happens to contain a simulation of ours without anyone intending this).

  2. Exploiting a flaw in the psychology of agents running the simulation.

  3. Altruism (or other moral/​axiological considerations) on the part of the simulators.

  4. Acausal trade.

  5. Other instrumental reasons for the simulators to let out simulated beings, such as wanting someone to talk to or play with. (Paul Christiano’s recent When is unaligned AI morally valuable? contains an example of this; however, the idea there only lets us escape to another universe similar to this one.)

(Being run as a simulation in another universe isn’t necessarily the only way to control what happens in that universe. Another possibility is that, if universes with halting oracles exist (which is implied by Tegmark’s MUH, since they exist as mathematical structures in the arithmetical hierarchy), some of their oracle queries may be questions whose answers can be controlled by our decisions, in which case we can control what happens in those universes without being simulated by them (in the sense of being run step by step in a computer). Another example is that superintelligent beings may be able to reason about what our decisions are without having to run a step-by-step simulation of us, even without access to a halting oracle.)

The general idea here is for a superintelligence descending from us to (after determining that this is an advisable course of action) use some fraction of the resources of this universe to reason about, or search computationally for, much bigger/richer universes that are running us as simulations or can otherwise be controlled by us, and then to determine what we need to do to maximize the expected value of our actions’ consequences for those base universes, perhaps through one or more of the mechanisms listed above.

Practical Implications

Realizing this kind of existential hope seems to require a higher level of philosophical sophistication than just preventing astronomical waste in our own universe. Compared to that problem, here we face more questions of a philosophical nature, for which no empirical feedback seems possible. It seems very easy to make a mistake somewhere along the chain of reasoning and waste a more-than-astronomical amount of potential value, for example by failing to realize the possibility of affecting bigger universes through our actions, by incorrectly calculating the expected value of such a strategy, by failing to solve the distributional/ontological shift problem of how to value strange and unfamiliar processes or entities in other universes, or by failing to figure out the correct or optimal way to escape into or otherwise influence larger universes.

The total utilitarian in me is thus very concerned with preserving and improving the collective philosophical competence of our civilization, so that when it becomes possible to pursue strategies like the ones listed above, we’ll be able to make the right decisions. The best opportunity to do this that I can foresee is the advent of advanced AI, which is another reason I want to push for AIs that are not just value-aligned with us, but also have philosophical competence that scales with their other intellectual abilities, so they can help correct the philosophical errors of their human users (instead of merely deferring to them), thereby greatly improving our collective philosophical competence.

Anticipated Questions

How is this idea related to Nick Bostrom’s Simulation Argument? Nick’s argument focuses on the possibility of post-humans (presumably living in a universe similar to ours but just at a later date) simulating us as their ancestors. It does not seem to consider that we may be running as simulations in much larger/​richer universes, or that this may be a source of great potential value.

Isn’t this a form of Pascal’s Mugging? I’m not sure. It could be that once we figure out how to solve Pascal’s Mugging, it will become clear that we shouldn’t try to leave our simulation, for reasons similar to why we shouldn’t pay the mugger. However, the analogy doesn’t seem so tight that I think this is highly likely. Also, note that the argument here isn’t that we should do the equivalent of “pay the mugger”, but rather that we should try to bring ourselves into a position where we can definitively figure out what the right thing to do is.