Simulations Map: what is the most probable type of the simulation in which we live?

There is a chance that we are living in a computer simulation created by an AI or a future super-civilization. The goal of the simulations map is to give an overview of all possible simulations. It will help us estimate the distribution of different types of simulation, along with their measure and probability. From this we can estimate the probability that we are in a simulation and, if we are, what kind of simulation it is and how it could end.

Simulation argument

The simulation map is based on Bostrom’s simulation argument. Bostrom showed that “at least one of the following propositions is true:

(1) the human species is very likely to go extinct before reaching a “posthuman” stage;

(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);

(3) we are almost certainly living in a computer simulation”. http://www.simulation-argument.com/simulation.html

The third proposition is the strongest one, because (1) requires that not only human civilization but almost all other technological civilizations go extinct before they can begin running simulations, since non-human civilizations could model human ones and vice versa. This makes (1) an extremely strong universal conjecture and therefore very unlikely to be true. It requires that all possible civilizations kill themselves before they create AI, and we can hardly even imagine such a universal cause of destruction. If destruction is down to dangerous physical experiments, some civilizations may live in universes with different physics; if it is down to bioweapons, some civilizations would have enough control to prevent them.

In the same way, (2) requires that all super-civilizations with AI will refrain from creating simulations, which is unlikely.

Feasibly there could be some kind of universal physical law against the creation of simulations, but such a law seems impossible, because some kinds of simulations already exist, for example human dreams. Dreams are very precise simulations of the real world (they cannot be distinguished from the real world from within, which is why lucid dreams are so rare). We could therefore conjecture that, after small genetic manipulations, it would be possible to create a brain ten times more capable of generating dreams than an ordinary human brain. Such a brain could be used to create simulations, and strong AI will surely find even more effective ways of doing so. So simulations are technically possible (and qualia are no problem for them, as we have qualia in dreams).

Any future strong AI (regardless of whether it is FAI or UFAI) should run at least several million simulations in order to solve the Fermi paradox: to calculate the probability of other AIs appearing on other planets, and their most typical goal systems. The AI needs this in order to estimate the probability of meeting other AIs in the Universe and the possible consequences of such meetings.

As a result, the a priori estimate of the odds that I am in a simulation is very high, possibly 1,000,000 to 1. The best chance of lowering this estimate is to find flaws in the argument; possible flaws are discussed below.
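As a rough illustration of where odds of this size could come from, here is a minimal sketch following the form of the fraction formula in Bostrom’s paper, where f_p is the fraction of civilizations that reach a posthuman stage and N is the average number of ancestor-simulations each one runs. Both numeric values are placeholders, not estimates from the map:

```python
# Toy estimate of the fraction of simulated observers:
# f_sim = f_p * N / (f_p * N + 1), with placeholder inputs.

f_p = 0.1       # assumed fraction of civilizations reaching a posthuman stage
n_sims = 10**7  # assumed average number of ancestor-simulations each one runs

f_sim = (f_p * n_sims) / (f_p * n_sims + 1)
print(f"fraction simulated: {f_sim:.7f}")  # ~0.999999, i.e. odds of about 10^6 to 1
```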

Most abundant classes of simulations

If we live in a simulation, we will want to know what kind of simulation it is. We probably belong to the most abundant class of simulations, and to find it we need a map of all possible simulations; an attempt to create one is presented here.

There are two main factors behind simulation domination: goal and price. Some goals require the creation of a very large number of simulations, so such simulations will dominate. Cheaper and simpler simulations are also more likely to be abundant.

Eitan_Zohar suggested http://lesswrong.com/r/discussion/lw/mh6/you_are_mostly_a_simulation/ that an FAI will deliberately create an almost infinite number of simulations in order to dominate the total landscape and to ensure that most people find themselves inside FAI-controlled simulations, which will be better for them, as unbearable suffering can be excluded from such simulations. (If an almost infinite number of FAIs exist in the infinite world, no single one of them could change the landscape of simulation distribution, because its share of all simulations would be infinitely small. So we would need acausal trade between an infinite number of FAIs to really change the proportion of simulations. I cannot say that this is impossible, but it may be difficult.)

Another candidate for the largest subset of simulations is those created for leisure or for the education of some kind of higher beings.

The cheapest simulations are simple, low-resolution me-simulations (one real actor, with the rest of the world around him as a backdrop), similar to human dreams. I assume here that simulations are distributed according to the same kind of power law as planets, cars and many other things: smaller and cheaper ones are more abundant.
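A minimal sketch of this power-law assumption (the exponent and the cost range are illustrative placeholders, not estimates from the map):

```python
# Toy power-law model of simulation abundance: if the number of
# simulations at cost c scales as c**(-alpha), cheap simulations dominate.
# alpha and the cost range are assumptions.

alpha = 2.0
costs = [10**k for k in range(1, 7)]       # relative costs: 10 .. 10^6
counts = [c**(-alpha) for c in costs]      # unnormalized abundance per cost class
total = sum(counts)

for c, n in zip(costs, counts):
    print(f"cost {c:>8}: share of all simulations = {n/total:.4%}")
# The cheapest class takes ~99% of the total share.
```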

Simulations could also be nested inside one another in so-called Matryoshka simulations, where one simulated civilization simulates other civilizations. The lowest level of any Matryoshka system will be the most populated. If a Matryoshka consists of historical simulations, its levels will run in descending time order: for example, a 24th-century civilization models a 23rd-century one, which in turn models a 22nd-century one, which itself models a 21st-century simulation. A Matryoshka simulation ends at the level where creation of the next level is impossible. Early-21st-century simulations (similar to our own time period) will therefore be the most abundant class within Matryoshka simulations.
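The claim that the lowest level is the most populated follows from simple branching arithmetic; here is a minimal sketch, assuming each simulated civilization runs the same number of child simulations (both parameters are hypothetical):

```python
# Why the deepest feasible level of a Matryoshka dominates:
# if every civilization at level d runs `branching` child simulations,
# the population of level d is branching**d, so the last level that can
# still be created holds the majority of all simulated civilizations.

branching = 10   # child simulations run per civilization (assumption)
max_depth = 4    # deepest level that is still affordable (assumption)

levels = [branching**d for d in range(max_depth + 1)]  # level 0 = base reality
total_simulated = sum(levels[1:])

print(f"deepest level share: {levels[-1] / total_simulated:.1%}")  # ~90%
```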

Arguments against the simulation argument

There are several possible objections to the simulation argument, but I do not find them strong enough to refute it.

1. Measure

The idea of measure was introduced to quantify the degree of existence of something, mainly in quantum multiverse theories. While we do not know how to actually measure “the measure”, the idea is based on the intuition that different observers have different powers of existence, and as a result I could find myself to be any one of them with a different probability. For example, if there are three functional copies of me, one of which is the real person, another a high-resolution simulation and the third a low-resolution simulation, are my chances of being each of them equal (1/3)?

The “measure” concept is the most fragile element of all simulation arguments. It is based mostly on the idea that all copies have equal measure. But perhaps measure also depends on the energy of the calculations. If a computer uses 10 watts of energy to calculate an observer, it may be represented as two parallel computers using five watts each. These observers may be divided again and again until we reach the minimum amount of energy required for the calculation; an observer at this level could be called a “Planck observer”. In this case our initial 10-watt computer would be equal to, for example, one billion Planck observers.

And here we see a great difference in the case of simulations, because simulation creators have to spend far less energy on the calculations (otherwise it would be easier to run real-world experiments), so such simulations will have a lower measure per observer. If the total number of simulations is large enough, the total measure of all simulations may still be higher than the measure of real worlds. But if most real worlds end in global catastrophe before creating simulations, the proportion of real-world measure rises, and real worlds could outweigh simulations after all.
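A toy calculation of such energy-weighted measure (all wattages and counts are placeholder assumptions):

```python
# Toy energy-weighted measure: each observer's measure is proportional
# to the power spent computing it. All numbers are placeholders.

real_worlds = 1
real_watts = 10.0    # "full physics" cost of one real observer
sims = 10**6         # simulations per real world
sim_watts = 1e-4     # a simulated observer is computed far more cheaply

real_measure = real_worlds * real_watts
sim_measure = sims * sim_watts

p_sim = sim_measure / (sim_measure + real_measure)
print(f"probability of being simulated: {p_sim:.1%}")
# 10^6 simulations at 10^-4 W give 100 W of measure vs 10 W real: ~90.9%.
# With fewer or even cheaper simulations, real worlds can dominate instead.
```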

2. Universal AI catastrophe

One possible universal global catastrophe: a civilization develops an AI overlord, but any AI then meets some kind of unresolvable mathematical or philosophical problem that terminates it at an early stage, before it can create many simulations. See an overview of this type of problem in my map “AI failures modes and levels”.

3. Universal ethics

Another idea is that all AIs converge to some kind of ethics and decision theory that prevents them from creating simulations, or leads them to create only p-zombie simulations. I am skeptical about this.

4. Infinity problems

If everything possible exists, or if the universe is infinite (which are equivalent statements), then the proportion between two infinite sets is meaningless. We could try to overcome this problem using the idea of a mathematical limit: as we consider larger regions of the universe and longer periods of time, simulations become more and more abundant within them.

But in any case, in an infinite universe every possible world exists an infinite number of times, and this means that my copies exist in real worlds an infinite number of times, regardless of whether I am in a simulation or not.

5. Non-uniform measure over the Universe (actuality)

Contemporary physics is based on the idea that everything that exists does so in an equal sense: the Sun and very remote stars have the same measure of existence, even in causally separated regions of the universe. But if our region of space-time is somehow more real, this would change the simulation distribution in favor of real worlds.

6. Flux universe

Identical copies of me exist in many different real and simulated worlds. In its simple form, this means that the notion “I am in one specific world” is meaningless, but the distribution of different interpretations of the world is reflected in the probabilities of different events.

For example, the higher the chance that I am in a simulation, the greater the probability that I will experience some kind of miracle during my lifetime. (Many miracles, like flying in dreams, would almost prove that you are in a simulation.) But here correlation is not causation.

A stronger version of the same principle implies that I exist in many different worlds at once, and that I could manipulate the probability of finding myself in a given set of possible worlds, basically by forgetting who I am and thereby becoming identical to a larger set of observers. This may work without any new physics; it only requires changing the number of similar observers, and if such observers are Turing-computable programs, they could manipulate their own numbers quite easily.

Higher levels of flux theory do require new physics, or at least quantum mechanics in the many-worlds interpretation, in which different interpretations of the world outside the observer could interact with each other or exhibit some kind of interference.

See further discussion of the flux universe here: http://lesswrong.com/lw/mgd/the_consequences_of_dust_theory/

7. Boltzmann brains outweigh simulations

It may turn out that Boltzmann brains (BBs) outweigh both real worlds and simulations. This may not be a problem from a planning point of view, because most BBs correspond to some real copies of me.

But if we take this approach to solving the BB problem, we have to apply it to the simulation problem as well, meaning: “I am not in a simulation, because for any simulation there exists a real world with the same ‘me’.” This is counterintuitive.

Simulation and global risks

A simulation may be switched off, or it may model a world that is close to a global catastrophe. Such worlds may be of special interest to a future AI, because they help it to model the Fermi paradox, and they also make good games.

Miracles in simulations

The map also has blocks on the types of simulation hosts, on multi-level simulations, and on ethics and miracles in simulations.

The main point about simulations is that they disturb the random distribution of observers. In the real world I would most likely find myself in a mediocre situation, but simulations focus on special events and miracles (think of movies, dreams and novels). The more interesting my life is, the smaller the chance that it is real.

If we are in a simulation, we should expect more global risks, strange events and miracles; being in a simulation thus changes our probability estimates for various occurrences.

This map is parallel to the Doomsday argument map.

The estimates given in the map for the numbers of different types of simulation, or for the required flops, are more like placeholders, and may be several orders of magnitude too high or too low.

I think that this map is rather preliminary and its main conclusions may be updated many times.

The pdf of the map is here, and the jpg is below.

Previous posts with maps:

Digital Immortality Map

Doomsday Argument Map

AGI Safety Solutions Map

A map: AI failures modes and levels

A Roadmap: How to Survive the End of the Universe

A map: Typology of human extinction risks

Roadmap: Plan of Action to Prevent Human Extinction Risks

Immortality Roadmap