Physicalism implies experience never dies. So what am I going to experience after my body does?

If there are no computer simulations

What will happen to me after I die? What is the next moment I am most likely to feel? In what reality will I be? These questions matter if experience is immortal. The underlying idea: if we are a way of feeling information processed in a specific way (computationalism, or any similar theory of how consciousness emerges), then we exist wherever some system "recreates" us. We can feel something, or ourselves, only in futures that exist. Our subjectivity will never die, because there is always some future, however improbable and abstract. For an explanation, see https://2018/06/01/consciousness-is-forever-2/ by Mario Alejandro Montano. If you already know this view, here are my thoughts on the futures I find in some way likely. Forgive me if I express myself unclearly or cannot describe something precisely in strict terms; unfortunately, I have had little real opportunity to learn to communicate fluently in English.

It depends directly on my measure: on what fraction of the systems recreating my existence will have which future. My measure decreases with each death, in fact with each passing moment, though perhaps, if something like the Youngness paradox holds, it actually increases. But no: the universe is eternal, time is an emergent, non-fundamental property, and past and future exist equally. Even if the number of systems recreating me increases, in eternal inflation most of them are young versions of "me". Unless so many older versions of "me" are simulated that my measure increases after all.

Maybe measure should decrease as a result of entropy, and at any time we should exist as the states with the greatest possible measure? "Now" is a moment of the greatest possible measure; a moment later too, and so on. In this way, the greatest measure of "us" would always lie in what we perceive as the past. In effect, we would be moving subjectively against the flow of measure: from the outside there would always be fewer and fewer systems performing us, yet we can only subjectively feel the next moment, remembering the previous one and not the other way around. This is how time must work; we cannot remember the future. Thus, by having more information, a more informationally complicated mind, or by existing in more complex information systems that can reproduce us, we can only decrease in measure.

Can I increase the measure, and if so, how? By simplifying the information system or one's own mind, by erasing information (knowledge, memories, experiences) from it. In a certain subjective sense this happens with every extinction of consciousness: sleep, fainting, death. But because systems wake up from sleep and not from death, not in the same place and not as the same set of atoms, the system that recreates me collapses, even though other systems will continue to observe and to experience.

The second way, after simplification, would be to make multiple copies of the information processing in a computer simulation, recreating the sequences or strings of information that make me up. Probably this would have to be done on quantum computers, to simulate all the copies from the parallel branches of the wave function at once. With current computers, each of these branches would have to be represented and simulated separately, which would waste computing power. I assume the simulations of the future will be quantum ones, so they are closer to non-simulated reality. (Perhaps they would even be so in a technical sense, if the universe itself acts as a simulation, a computation applied to a system of fundamental laws.)

Boltzmann brains, if many of them arise in the inflationary multiverse, cannot at first glance increase the measure of my specific states or futures, for they are random and not focused on duplicating any particular state or history. In fact, the most common structures created as fluctuations are very simple; the more complicated the system, the lower the probability of its formation, probably by orders of magnitude. So you might think that brains with one memory fewer will be far more common than brains with one memory more. It is not ultimately clear to me whether future minds need to be more informationally complex, but I am under the impression that they do, because of the increasing amount of information; in particular, information that is memorized and, above all, forms part of our identity, updating it and making it more complex.

It may be that, if simulations are not possible, Boltzmann brains turn out to constitute much more of our future measure. Even if transhumanism develops and beings from other planets create physical brains at random to save at least some of the dead, and even if the superintelligences destined to create such physical brains turn out otherwise, the Boltzmann carriers of our future states may exist in such quantity that they form a millions-of-times denser part of our future measure. It is enough for a static universe to exist with one brain per Hubble volume, without time and without any movement or change. If there were brains informationally similar to each other, with similar arrangements of atoms or wave functions, such that if a Planck time had elapsed one static brain would become informationally identical to another existing beyond some cosmic distance, then in such a static universe without time there would be time and observers, consciousness and experiences; at least such a view seems quite logical and intuitive to me. There is no need for real movement or information flow for information to feel as though it is flowing. So if there were a sequence of brains emulating consciousness Planck time by Planck time, such that one arises in one world and disappears, the next in another world a billion years later and disappears, the third the same, and so on for any length of time, then despite the lack of any physical flow of information there would be a mind feeling it. Is it not so now? If you disappeared and reappeared a billion years later, from the outside you would cease to exist, yet subjectively nothing would have happened. The order of the sequence wouldn't even matter: future brains could arise first, past brains after them, or all at random. Information subjectively has to create a coherent experience, because incompatible moments cannot simply follow or continue each other subjectively.
This, in a sense, already happens: as long as the impression of us exists as an emergent property of information flow, that information jumps in Planck times. Reality, according to our best models, is discrete, not continuous.

The same would happen if two people, or even all people, were simulated alternately in one system, a Planck time each. The same might be the feeling of being a Boltzmann brain, though given the randomness of the experiences felt, with no apparent selection criterion other than simplicity, it would presumably be a state of minimal subjective awareness. It takes memories and identity to maintain the form of a person; being a Boltzmann brain, in my imagination, would be like a dream, or, if more aware, an abstract and logicless cosmic trip.

A mystic of cosmicism would envision dreams as being Boltzmann brains. Not being them as a dream, but interpreting the dream itself as a state in which Boltzmann brains make up more of our measure than usual. Wouldn't being one be like a dream, a billion-year dream after which we wake up in the laboratory of aliens or posthumans who created our brain from scraps of matter as part of a random experiment? Or in a simulation?

There is no indication that simulating minds is impossible. On the contrary, we can even think of ourselves as a simulation already: an experience simulated by neuronal systems. Given the appropriate computational power, even ordinary computing systems, let alone quantum ones, should be able to perfectly simulate the state we are in, or any state we find ourselves in.

But what if there are very few or no simulations (if there are none, it is because they are impossible after all; I will focus here on the "very few" option)? Regardless of whether a larger measure of our future dies of old age, accident, murder, catastrophe or suicide, I imagine there would be fewer Boltzmann brains with resurrection experiences than actual survivors. So we would most likely wake up in a critical state, with broken arms and spines; presumably our bodies would be damaged in proportion to the probability of that death. The prospect of living on in such a limited way is a tragedy for most imaginable beings capable of feeling tragedy. If we are more likely to exist as informationally closer copies of ourselves, it seems our minds should be less affected when it comes to cognitive damage: I find it less likely to continue as a severely handicapped person after an accident than to survive with broken limbs or spine but an undamaged mind. How strongly the complexity of the mind that continues us attracts our subjective being is unclear to me, but being more likely a future version of the mind with undiminished mental faculties seems at least a significant factor.

Our bodies, however, would be paralyzed, decayed, or at least severely damaged. Many of them would die shortly after the first resurrection, reducing the measure of our copies by probably an order of magnitude. After the third and subsequent deaths, by organ failure or euthanasia, our unsimulated measure would be so slight that even a minimal number of simulated copies would make it more likely that we continue to exist in them. If there are no simulations, waking up as a randomly recreated cluster of memories in a brain made by humans, posthumans, aliens, or a post-civilization superintelligence seems a realistic scenario, at least to me.

If there are no simulations, cryonics might be a good idea, as Alexey Turchin argues in a similar article. Preserving our brains may make it more likely that advanced technology restores us, and informationally similar versions of us, to a greater extent. Perhaps future civilizations would recreate us all anyway, to enlighten us and fulfill our desires, or to altruistically remove the lack of fulfillment. Perhaps we would thereby reduce the chances that Roko's basilisks and similar beings are the first, and greatest in measure, to make our brains out of nothingness.

If there are computer simulations

But given that perfect computer simulations of our sensations are possible, we may place a greater probability on a large part of our measure existing this way, and perhaps even on only a small part of our measure not existing in computer simulations, in virtual form, at all.

Nick Bostrom constructed a trilemma: (1) no or almost no civilization has the ability to create simulations, so simulations do not exist or hardly exist; (2) civilizations have the ability to simulate other minds but never do so, at least never of an ancestral kind; and (3) civilizations have the ability to simulate other minds and they do it. He showed that if there are civilizations that simulate states of mind, the number of simulated states most likely exceeds the number of non-simulated ones many times over. Bostrom focused on creating and maintaining ancestor simulations, but the argument holds if we take as the goal simulating any other person-like beings that may be widely (or even very rarely, but in very large numbers) valued by civilizations. Even if only a tiny fraction of all technological civilizations attained the capacity to simulate other minds, and/or only a fraction of those considered it desirable to do so, the computational possibilities of matter seem in practice limitless.

In one of the works on this subject, the ultimate limit of computing power was set at 5.4258 × 10^50 operations per second per kilogram of matter. It is estimated that simulating a human mind in real time requires 36.8 × 10^15 operations per second; the largest current supercomputers are capable of on the order of 10^18 flops. Simulating 7 billion people in real time would take about 257.6 × 10^24 operations per second. About 4 × 10^48 operations per second is the estimated capability of a matryoshka brain surrounding the Sun, with its outermost shell operating at 10 kelvin. That leaves a gap of more than twenty orders of magnitude between what is needed and what is available, loosely comparable to the ratio between the size of our bodies and the size of the observable universe.
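The orders of magnitude above can be checked directly (all figures are the estimates quoted in this paragraph, not established constants):

```python
# Rough sanity check of the orders of magnitude quoted above.
# All inputs are the article's assumed estimates.

ops_per_mind = 36.8e15   # ops/s to simulate one human mind in real time
population   = 7e9       # minds to simulate
matryoshka   = 4e48      # ops/s of a matryoshka brain around the Sun
lloyd_limit  = 5.4258e50 # ops/s for 1 kg of matter (quoted ultimate limit)

total_needed = ops_per_mind * population
print(f"needed for 7e9 minds: {total_needed:.3e} ops/s")        # 2.576e+26
print(f"minds a matryoshka brain could run in real time: "
      f"{matryoshka / ops_per_mind:.1e}")                       # ~1.1e+32
print(f"headroom over the need: {matryoshka / total_needed:.1e}x")
```

So even one matryoshka brain could, on these estimates, simulate the whole of present humanity with some twenty-two orders of magnitude to spare.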

Thus, taking the simulation hypothesis seriously, we face the trilemma: (1) no civilization can simulate minds, either because it is impossible or because of enigmatic great filters annihilating them all; (2) no civilization, or almost none (where "almost none" would have to be an infinitesimal part of civilization as a whole), chooses to simulate minds similar to ours, because of the cost or the immorality of it, or for any other reason guiding their specific brains; or (3) civilizations can create and maintain person-like simulations using nearly unlimited computational power, and they do so.

I do not consider plausible a scenario in which no civilization ever succeeds in simulating minds; nothing physical seems to prevent their creation, or to be the reason why no civilization would live long enough to actualize such a possibility. One might speculate that an extremely highly developed anti-simulation civilization prevents this, but that would have to be a very common trend among civilizations throughout the universe. Since our existence already seems, in effect, to be a simulation inside the brain itself, nothing prevents the creation of artificial or virtual brains; as long as the flow of information follows the same pattern, it will be perceived as subjective being.

By default, I also see no reason for civilizations to refrain from exercising their abilities once they have such power, and I see many reasons why such a scenario, creating almost immeasurable possibilities, could be realized. For this reason I believed* that the third option is overwhelmingly more probable: to those chained to the classical image of the world it is an absurd and exotic abstraction, but in my opinion it is the logically most realistic conclusion that our mind is simulated in almost 100% of the cases of its occurrence.

*{In the first two cases, the probability that we live in a simulation is equal to or close to zero (in an infinite universe it is never exactly zero, unless simulating minds is impossible); otherwise, the number of simulated entities, of simulated impressions, is most likely many orders of magnitude greater than the natural ones, so the probability of being simulated is approximately 100%. Therefore almost 100% of the minds that feel my subjective moment of observation, my instantaneous impression of my entire emergent being, are simulated minds. I do not take into account Boltzmann brains and the seemingly infinitely improbable physics of the worlds of the Big World, which by some estimates may outnumber natural self-existing minds, if almost 100% of that majority of the multiverse can produce them and universes of our type are almost negligibly rare. That would only mean that natural self-existence would not be the second least likely option, but the third and least likely.} At least that was my first impression, for there is no universal reason why civilizations should not use their powers to create minds.

However, I believe there is a very good universal reason why civilizations should not simulate beings similar to us. Moreover, I believe there is an overwhelmingly compelling reason why any sufficiently post-biological beings should focus on creating only one type of simulation, one that would in no way resemble the existence we observe. I believe the vision of the singularity, of reaching successive levels of development and intelligence at an exponential rate for both AI and people after transhumanism, will not lead us to a diverse world of various individual paradises, but to a collection of highest experiences, very similar to each other: becoming mystical, enlightened post-minds that have spontaneously abandoned most present visions of paradises or lives. Why spend simulated lives exploring the cosmos, incarnating as aliens, or experiencing emotional lives like games, when you can immediately rise to the heights of existence, a state in which all understanding and feeling of everything would be so divine that any embodied physical form of the previous us could not grasp it, dreaming of Avatar's jungles or traveling through the stars? The greatest dreams of any human or transhuman would be, for the ultimate creatures, childish needs for ice cream and the best set of toys. Our limited brains tend to imagine a literally colorful and diverse future, but as we gain intelligence and opportunity, why should we base our fulfillment on anything other than an unchanging, absolute fulfillment?

Why not fulfill your desires as soon as possible; why not ascend your minds, together with the singularity, in a fraction of a century, to the greatest heights of existence? Why, like the animals we have always been, keep satisfying the new desires and needs that appear in our heads? I see no such need. Since with maximum intelligence you can understand everything, rid yourself of unnecessary desires, and feel cosmic fulfillment aware of everything that can be, why not become such highest minds right away?

Why should such a state even involve mystical ecstasy and stellar bliss? After all, I see no reason why we should drool over our omniscience in orgasmic happiness, stoned as if on heroin. I don't see why happiness should be felt in such a physical, neurotransmitter-conditioned way. When I speak of fulfillment, I do not mean happiness; happiness and joy are all too easy to imagine as impermanent and dependent on many factors. By fulfillment I mean just this: the most exalted nirvana, devoid of the needs and desires of beings. Just as science fiction writers once imagined planets colonized by humans in spacesuits, those visions have become obsolete: nobody will live on other planets in human form, and by the time the cosmos is colonized, bodies will take the form of freely designed beings, not necessarily biological. Why would anyone even bother colonizing the physical cosmos when the infinite possibilities of the virtual world, opal abysses of freely imagined universes, open up before us if we only build computers designed to simulate them? And even then, would you really need to feel each of the billions of wonderful moments and stories? Could a million pleasures of eating a crisp or feeling the sun's rays on your face equal the ecstasy of taking even a post-ape, a poor hominid, to the subjective heights of heavenly illumination? In the midst of a sea of shiny plastic, would we not reach for the diamond of transcendence floating above it, even aware that we would no longer be able to touch the glitter? You don't even have to give up anything: computing power should suffice for anyone lucky enough to experience the singularity to play with the glitter for trillions of subjective years, and after that to spend the rest of infinity as the highest, most exalted being, the pinnacle of fulfillment dissolving in nirvana.

However, this is not a convincing vision for me. Why should we primitively value the little pleasures on the way to the great, the greatest? Actually, why should we do so against all nature, since animals strive for the greater pleasure right away? For the dubious idea that the path counts rather than the goal, and that it is the path that gives meaning, would we put the final goal beyond our reach for an additional quadrillion years, a fraction of the time expected to be spent existing in the best of states? After all, the memories of a quadrillion years could be inscribed in us just like all the joys on the way to the ultimate goal.

So I imagine the ascension in the singularity occurring rapidly, like the singularity itself, without the need to create elaborate paradises as amusement parks for under-enlightened post-humans. As soon as there are technology and superintelligence to bring us closer to these peaks, the explosion of new technologies and superintelligences in the last revolution of mankind, or of any other civilization, if it does not destroy us or create a cosmic monster within that short period, will let us reach this peak. It will be the final creation of God and the creation of the End, when we ourselves become maximally enlightened gods. These would be the final simulations. We certainly do not exist in such a way.

What would be the purpose of such beings? The goal would not be to feel further states out of curiosity or pleasure, for all-conscious beings have no greater pleasure and no curiosity. I do not insist on imagining any needs and desires in such a state; by definition, a state of fulfillment is a state of their absence. So, whether we hominids like it or not, I see no reason why the ultimate state of beings' pursuit should not be one of utterly indifferent nirvana, of complete, dispassionate fulfillment, without the need to feel various stimuli or learn more, since everything is already known. How complicated would such maximally aware dreaming minds be? I don't know; perhaps complexity and awareness would not even matter, as long as they stayed at the level of continued fulfillment. In Eastern religions and among mystics, the pinnacle of spirituality is to merge with the deity, to be one with others and the cosmos. To whatever extent it is possible to merge with other brains and exist as a Cosmic Unity, it will be realized, if such a vision is really the best. So I do not expect or imagine discovering the world and realizing myself piece by piece, but an almost immediate rise to the final state of (for lack of a better word) "nirvana".

And why should only those beings that existed on the planet of the singularity be ascended, saved from a life full of potential pain, needs, and desires? Wouldn't it be possible to save the rest of the cosmos, even casually, if you were already a god? Wouldn't it be all the more necessary to do so in a cosmos to some extent full of almost infinite suffering?

Alexey Turchin, Mario Alejandro Montano, myself, and probably many other people navigating the darkness of this incomprehensible, unpopular subject have understood the following: if we are patterns in processed information, then we are subjectively immortal in a sufficiently vast universe in which every scenario of our informational existence is realized; and if our future state depends on the number, the measure, of systems in which information is processed in a specific way, which makes some futures more likely, then future subjective and informational states can be saved from torture, pain, fear, despair, and an eternally unfulfilled existence. Every being, every mind, every subjective impression such as we are: all the slaves, martyrs, minds simulated in hells, prehistoric animals torn apart, representatives of other civilizations, and any other suffering beings of all pasts and futures could be saved. Besides, the cosmos is not divided into the past and the future. Eternalism is implied by the theory of relativity: there is no universal time, the past has not passed, and the future is not yet nonexistent. Everything exists equally, eternal and unchanging; it is only information that feels time. People were not skinned in the past; people are being skinned, and we merely perceive it as our past. They always exist, just as real, in their frame of reference. We are most likely to find ourselves in the informational state most similar to ours today, among those with the greatest measure. Thus, by increasing the measure of a moment, we can change the subjective future, or rather the probability of a given future for any state. Simulating a thousand or a million identical states that are suffering, or about to experience excruciating suffering, and simulating their futures so that within seconds suffering turns into bliss and peace, therefore seems theoretically possible.
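The arithmetic behind this kind of measure-boosting can be put in a toy model (my own illustration with made-up numbers, not anything from the cited authors): if a suffering moment has some natural continuations and a benevolent simulator runs a vastly larger number of copies whose next moment is bliss, the subjective probability of bliss approaches one.

```python
# Toy model of boosting the measure of a desired continuation.
# All states and numbers are invented for illustration only.

def p_next(measures):
    """Normalize raw measures into subjective transition probabilities."""
    total = sum(measures.values())
    return {state: m / total for state, m in measures.items()}

# Natural continuations of a suffering moment: mostly more suffering.
natural = {"continued suffering": 999, "spontaneous relief": 1}
print(p_next(natural))  # relief is negligible

# A benevolent simulator adds a million copies whose next moment is bliss.
hacked = dict(natural, **{"simulated bliss": 1_000_000})
probs = p_next(hacked)
print(f"P(bliss) = {probs['simulated bliss']:.4f}")  # ≈ 0.9990
```

The point of the sketch is only that the simulator never has to touch the natural continuations; it outvotes them by adding measure.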

Salvation by changing the probability of subsequent moments, by simulating an enormous measure of them, could apply to any being. The benevolent superintelligences of post-civilizations could make this their sole target; it is, in fact, the only ultimate goal worthy of an eternally continued existence. Immediately after being saved from suffering, such a mind could be turned into a state of ultimate fulfillment. The question for me is not why this should be done, but what the difficulties or impediments to doing it might be. Why should the salvation of all beings, of all feelers, from real and potential suffering in an immortal existence not be the only priority of any superintelligent being?

Perhaps the very fact that we exist here and now, and not in a state of ultimate transcendence, is proof that for some reason this does not happen. Who knows; maybe it is as Mario Alejandro Montano says in Savior Imperative 33 (on YouTube): we have failed. Apart from the options "almost nobody will ever do anything about it" and "it's not possible", there is also the option that there is not enough computing power to save us all, or to save us now. This can be read as a kind of impossibility, but a partial rather than total one; call it option 2.5.

Surely, if we were transcendent, ultimate versions of experience, we would not feel it the way we feel our existence now. We are not liberated, we are not saved, but we do not have to be lost. I see no reason why the salvation scenario would be impossible; I would rather focus on difficulties, perhaps on the limits of the universe's computing capacity. A benevolent superintelligence would probably turn the galaxies into computronium to the maximum possible extent, simulating as many entities as possible. The scenarios for why this is not happening that come to my mind are arranged more or less like this:

Why are we not (yet) saved?

Perhaps the civilizations and superintelligences that run such all-altruistic simulations are, for some reason, exceedingly rare, or at least too rare for a substantial fraction of beings to actually be saved. They might arise so rarely that our chances of being saved by simulation are minimal, at least in the absence of excruciating suffering or a very reduced measure. Perhaps civilizations destroy themselves, wiping out their planets or creating space monsters, much more often than they manage to create a friendly superintelligence. And perhaps post-humans are not altruists at all, and fulfilling their own desires is their sole purpose; all computing power goes to creating havens for them, perhaps in the form of the postulated nirvana, perhaps in some other form. It is possible that evil, creator-centric, or indifferent superintelligences capable of simulating hells arise relatively more often and earlier, preventing the simulation of paradises for the sake of an idea, or denying good AIs the material resources they would need, in pursuit of their own goals (such as maximizing the number of paperclips), perhaps threatening to create hells along the way, to discourage good AI or achieve other instrumental goals.

The above is the more pessimistic version (the most pessimistic of the visions in which what I postulate is possible seems to be one in which the most commonly used algorithm simply does not work, and so saves no one, because the superintelligence is wrong, because the laws of the universe are such that it must most often be wrong). In it, not only should we not expect salvation from suffering or subsequent deaths; our chances of ending up in an eternally simulated worst hell may even be higher than our chances of salvation.

The more optimistic, or at least hopeful, scenario is one in which these salvation mechanisms are not uncommon in the future of civilizations, but the immediate salvation of all is ruled out because the computational capacity of the universe is not unlimited. The number of subjectively distinguishable, and thus non-identical, states of mind, patterns in the processing of information that create conscious sensations, is in fact very large, perhaps many orders of magnitude greater than the number of photons in our Hubble volume. Symbolically assuming that my mind produces 12 subjectively distinguishable moments of observation per second, and each of them has two different possible successor moments, after a second the wave function of the universe already contains over 4,000 minimally different versions of "me", and this number increases exponentially. After a minute we have 2^720 versions, about 5.5 × 10^216, while there are about 10^90 photons in the observable cosmos. After 80 years of conscious human life, or 71 without sleep, the number of versions in the wave function is 2 raised to the power of about 27 billion. Even without the universal wave function, they exist somewhere in an infinite universe.

Edit: this is probably completely wrong, because the number of momentary experiences does not grow exponentially; it only grows by two per step, if we assume different paths can lead to the same outcome. In that situation we have 24 experiences after 1 second, and only about 53 billion after 71 subjective years. The number of possible experience moments is still huge, but far smaller. Everything above in that first approximation is wrong (I just didn't want to delete that part, partly because the first person to read it will probably be future me anyway).
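The two counts, under this article's own symbolic assumptions (12 distinguishable moments per second, two successors per moment, and, in the corrected version, merging branches), can be checked directly:

```python
# Compare the exponential branch count with the corrected linear one.
# All parameters are the article's symbolic assumptions.

moments_per_second = 12
seconds_per_year = 365.25 * 24 * 3600
conscious_years = 71                    # 80 years minus sleep

steps_total = moments_per_second * seconds_per_year * conscious_years

# Exponential model (wrong, per the edit): every moment splits in two
# and branches never merge.
print(f"after 1 s: 2**12 = {2**12} versions")   # 4096
print(f"after 1 min: 2**720, about 5.5e216 versions")

# Linear model (the correction): branches merge, two new states per step.
print(f"after 1 s: {2 * moments_per_second} experiences")
print(f"after 71 y: {2 * steps_total:.1e} experiences")  # ~5.4e10, i.e. ~53 billion
```

The exponent in the exponential model after 71 years is the same `steps_total`, about 27 billion, which is why the first approximation above overshoots so badly.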

What is certain is that most of these scenarios have a minimal measure; probably only a fraction of a fraction makes a significant contribution as far as my future is concerned. Presumably, states of mind also merge, forgetting hundreds of thousands of irrelevant stimuli every moment. Nevertheless, the number of all possible sentient states of mind is very large; I would cautiously say difficult to overestimate even for any current computer.
If I were skinned or had limbs cut off, after a second there would be branches of the wave function worth saving, even if their probability of occurring is low (they are always realized; I merely have a lower probability of feeling them). Even a smaller measure here would matter much, much more than saving me from stressful classes with an extremely unpleasant professor. In pain, the subjective course of time may be stretched, even if the feeling is not intellectually varied. I can see several ways to prioritize when it is impossible to simulate everyone in a large enough measure.

As above, it is possible to focus on saving mainly, or only, entities in great pain. Sooner or later there will probably be enormous suffering in our subjective future, certainly in those futures in which we experience tragic damage to our bodies. A superintelligence could therefore prioritize simulating only the most suffering beings, perhaps those simulated in cosmic hells by demons like Roko's basilisk or an "I Have No Mouth, and I Must Scream" AI. Then the salvation of beings like us would not be realized at all; in that case, perhaps salvation would ultimately not be a very common future, replaced by momentary salvations, from only the most intense pains, that we could not even be aware of.

If the goal were to devote the maximum possible computing power to reducing suffering, it might not be profitable (wherever I use that word here, it means cost-effective in terms of maximum suffering reduction: computationally profitable, morally profitable) to simulate ultimately fulfilled minds. I don’t know how complicated or simplified our final, maximally desirable states could be, but they would probably have to be fully conscious, so simulating them, even at a slowed pace and with a very limited range of stimuli, would take a lot of computing power. So it seems to me that an all-merciful AI would not focus on minimizing any non-cosmic suffering: it would not simulate anything that was no longer subject to suffering, or that was not subject to enormous, perhaps nearly infinite, suffering. The algorithm would create multiple copies of the moment just before the suffering, and then reduce the feeling of suffering as quickly as possible, or lead to its end or avoidance. But there would be no further simulation and no enlightenment; there would be no transcendence. The being saved from immense pain would be returned to reality, and the measure of its simulated copies would decrease rapidly. Perhaps only this way of reducing suffering can be implemented.

Another idea is to simulate only the final stages of lives, those in which the measure is already very low, perhaps only after the second or a subsequent death. The measure of the mind could be increased, with limited awareness, during the moment of death, or just before it, while the mind is still fully aware. Then quickly making it a transcendent, final state could feel naturally progressive. It is also possible that if only cosmic suffering were saved, and no other civilizations saved in a different way (e.g. stepping in only when the future measure falls below some level that can be effectively computed), a greater measure would remain in non-simulated worlds (or worlds simulated in some way other than a final one), as well as in beings saved only after a few deaths, when the measure is already very small.

A great simplification would be the possibility of merging minds into one or a few. If “identity” did not have to be preserved (e.g. if the future were determined by memories and consciousness), perhaps this would be possible, and if it were, perhaps minds would end up in a conscious unity, one that constantly grows as it takes in more minds.

How would the AI and the entire saving system know which entities to simulate, and how? Is it possible to create an algorithm that determines the states and probabilities of suffering, and of any conscious states, without having to simulate those states themselves? If this is not possible, and salvation is our future, then we are largely, perhaps even 100%, part of a simulation designed to save us when we suffer or die (and maybe only histories containing great suffering would be simulated, which would not be a pleasant thing to learn about the one we are in). The paradise vestibules hypothesis claims that if we are approximately 100% simulated, the purpose of the simulation is exactly that: to save us.

“The goal of creating the best possible world would be to minimize suffering as much as possible. Perhaps in a finite world this goal could be achieved most effectively by sterilizing the universe, preventing future, potentially hellish suffering. In an infinite multiverse, this option will never work. Even civilizations, singletons and superintelligences that in a finite world would eventually make the all-altruistic decision to wipe out all life, preventing the emergence of future suffering entities, many of which, after the hardships of being, would collapse into non-existence after aimless wandering around the center of the galaxy, would, in an infinite universe, be forced to put all their energy into creating virtual best worlds. Or should I write the least bad worlds, since I hold that it is best for a being not to exist, and when it does exist, to cease as soon as possible; an infinite multiverse does not even allow such a dream.

So I argue that the creation of the least “evil” worlds is the ultimate goal of all, or almost all, altruistic beings endowed with the power to create them. I am also arguing that perfectly altruistic entities (i.e. those whose sole or guiding desire, purpose, or program is to minimize suffering), realizing the subjective immortality of minds and the variety of worlds, perhaps mostly simulated, in which they must exist, unable to prevent the infinite amounts of suffering that do and always will happen, and having enormous and constantly increasing computational capabilities, will strive to simulate an almost infinite number of copies of EVERY possible being and every scenario of its existence, so that ultimately the majority, as close to 100% as achievable, of the histories of existing beings end up in the virtual least-evil world, the best possible world, or rather the best possible state of that mind (which would be synonymous with the best possible world).

Let’s say there are one hundred trillion minds in a certain volume of the universe. Each of them will die, but subjectively their existence will continue in the strangest ways, many of them terrifying, such as surviving suicide, being kept artificially alive as a mind-controlled slave, or being simulated in a hell created by a sadistic AI. To balance these sufferings, a perfectly altruistic AI simulates each of these hundred trillion minds, in fact every life of each one in every future scenario, billions of trillions of times, as many times as its maximum efficiency allows, and then simulates the continued existence of each in the best possible state, so that every being after death has as close as possible to a 100% chance of ending up in paradise, nirvana, or whatever world is objectively best for it.

To achieve the goal of “saving” all beings, it would be necessary to simulate not entire worlds, but only the minds: the minds of the entire cosmos and of all other non-best-world simulations. Minds in the worst suffering, in the most terrible torments and the cruelest tortures, would have to be simulated hundreds of billions of times beyond those existing naturally and those simulated by other AIs. Contrary to appearances, this would not increase the amount of suffering in the universe at all, because every observer-moment, every being feeling such torment, necessarily already exists in an infinite number of copies in the infinite universe. By creating a gigantic number of additional copies, the only thing added is that such a being is finally almost 100% sure that its suffering will end and that it will find itself in the best possible state.
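The copy-flooding argument above can be put as a toy calculation. Under the essay’s own assumption that the subjective next moment is distributed over all copies of the current observer-moment, the chance that the next moment is relief is simply the simulated share of all copies (all numbers below are hypothetical):

```python
# Toy model of the copy-flooding argument (illustrative numbers only).
# If an observer-moment exists in N "natural" copies and an altruistic AI
# adds S simulated copies whose next moment is relief, the subjective
# probability that the next moment is relief is S / (N + S).
def p_rescue(natural_copies: float, simulated_copies: float) -> float:
    return simulated_copies / (natural_copies + simulated_copies)

print(p_rescue(1.0, 0.0))   # 0.0: with no simulated copies, no rescue
print(p_rescue(1.0, 1e12))  # very close to 1: flooding dominates the measure
```

This is why the essay claims the AI must out-simulate the naturally occurring copies by an enormous factor: the rescue probability approaches 100% only as the simulated share overwhelms everything else.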

I argue that it pays off for each civilization, and each of its representatives, to strive to create not the virtual paradise itself, but precisely its vestibules. The individual and the civilization thus increase the chance that the idea will be realized, and therefore that they themselves most likely already exist in such a vestibule, able to expect something other than an infinite existence in an absurd form after death. I believe that for this reason the trend to create vestibules is universal, and that if it is feasible and indeed the best idea, nearly 100% of our copies exist just like that, and almost all minds exist that way.”

The important thing is that this option is only feasible if one cannot effectively create realistic minds without simulating their pasts, which I personally do not think likely (one can probably create a realistic history and memories without consciously simulating them, and then simulate the mind perfectly on that basis), so I treat the paradise vestibule hypothesis rather as a curiosity.

Other simulations

Perhaps no way of salvation is so common, and most of the simulations that will magnify our future measure exist for purposes other than that of the hypothesis above. Below I imagine some of these scenarios.

Most likely, the species that naturally form civilizations share a set of characteristics that largely describes them all. These intelligence-linked traits, such as compassion, curiosity, and ambition, are clearly visible in human minds. Treating humanity as a representative civilization, we can consider what purposes simulated minds, or entire simulated worlds, would serve.

Depending on a civilization’s advancement and technological possibilities, one can wonder what types of simulations would be created most often, and in the greatest numbers, by humans, by transhumans (the transformed triumphants of transhumanism), by posthumans, perhaps post-biological creatures, and by various types of AI, both neuromorphic and constructed entirely from scratch. Depending on the type of being creating the simulation, we can therefore expect different degrees and kinds of simulation, dictated by different goals, and try to estimate the number of entities existing in each.

Starting with humans: what would be our species’ goals in simulating minds? Given the ability to rewrite the biological form of a mind into a digital copy, we could live forever in simulated worlds, but those worlds would probably be designed differently from the one we are in today, and I see no reason why we should not remember life from before we ceased to be material creatures. The most plausible kind of simulation created by humans before a transhumanist revolution seems to me to be research and experiment. Simulations of alternative future and past histories of the entire world, as well as of individual human minds in a world otherwise filled with non-conscious “mobs”, would serve historical, economic, psychological, and sociological purposes, providing an unimaginable amount of data and knowledge about ourselves. This would be so important that, from a utilitarian point of view, the (for many, immoral) decision might be made to simulate unhappy, mentally ill, stressed, melancholy, and depressed people, as well as geniuses, autistic people, sociopaths, and oppressed societies. The direct knowledge obtained in this way would be invaluable in improving the lives of the civilization’s creators. The total number of entities simulated for this purpose could be very low, average, or equal to the number of people, if everyone had the right to simulate their own copies in order to make better decisions and know themselves better. Even if it exceeded the number of unsimulated observer-moments several or many times over (it is not necessary to simulate a human from birth to death), I believe that pre-transhumanist civilizations would simulate the smallest number of entities of all.

Then we have transhumanist civilizations. Such perhaps already largely non-biological posthumans would probably be endowed with a similar, but more enlightened and deliberate, system of desires. Simulations could be used as a means of development: living successive lives would take hours while subjectively decades passed. Simulation could therefore also be a system of punishment; people punished in this way would serve multiple sentences of an unhappy life, reincarnated into successive incarnations, and the unpleasant observer-moments themselves could be simulated for such a person, maximizing the discomfort experienced. Simulating interesting observer-moments, or interesting or even ordinary lives by our standards, could be a way for future beings to experience other lives directly, as an end in itself: to be a singer, an Aborigine, an Inca chief, or an average person at the beginning of the 12th millennium after the construction of the first temple. Experiences from such lives could be a valued pastime or a profound ritual with a spiritual dimension. We would not have to be simulated by beings like ourselves. We could be an imaginary race like elves simulated by a completely alien civilization, one of the species recreated by aliens from an extinct civilization, or the fruit of even more advanced and exotic turns of fate. There are many more observer-moments in the reincarnation scenario: assuming that trans-biological beings have lifespans much longer than ours, they can live thousands of our lives, and thousands of thousands of others, as beings from the most realistic, most exotic, and most fantastic worlds. Perhaps, as the next incarnation of such a creature, after death we would be reminded of our entire, incredibly long history; we would return to being our “real” selves, at a much higher level of development than any human who ever existed. Existence in this kind of simulation seems more likely simply because of the sheer number of entities in such simulations.

The last group of entities capable of simulating conscious minds is superintelligences. Both neuromorphic and artificial superintelligent beings, perhaps simply programs, possess perhaps the greatest creative power available to any known being. For this reason, most, perhaps almost 100%, of simulated things, and of all minds in general, would be the product of various superintelligences’ activities, which is why the goals of such entities can be potentially the most consequential in their unlimited fertility. AI could be used to simulate the most enjoyable moments for as many beings as possible, to create dream worlds for individuals and entire civilizations, and to share prosperity with the rest of the cosmos, perhaps deliberately simulating as many happy beings as possible in order to increase the amount of good and pleasure in the cosmos and tip the scales against suffering and lack. Such superintelligences, let’s call them altruistic, would be a real treasure for anyone under their influence. The opposite scenario appears when we think about less pleasant possibilities: interspecies wars in which the minds of the losers are simulated for a subjective eternity in the most elaborate hells; vindictive, viral, or mad superintelligences, perhaps neuromorphic, i.e. developed from an existing mind, bullying beings, perhaps deliberately simulating as many sensations of suffering as possible; beings with the ambition to be avatars of perfectly evil gods, whose sole purpose, as a viral, mad, sadistic, selfish, psychopathic, defective, or incomprehensible mind, would be to make the cosmos the most terrible place possible, devoid of all hope for whatever has been forced into an endless existence of suffering. With superintelligent capabilities, almost any world could be simulated, not necessarily for any purpose other than the simulation itself.
Like a paperclip-maximizer SI, an SI aimed at increasing the possibilities and efficiency of simulation could develop and absorb material from asteroids, suns, or pulsars solely for the purpose of mechanically, divinely intelligently simulating all the most unbelievable worlds, as inadvertently inscribed in its elementary program by its unfortunate creators. Fairy-tale planets; countless, almost endless stories of spells and cosmic odysseys; absurdities, hells, paradises, purgatories, other simulations, and simulations within simulations would all be simulated. So would reverse-logic worlds, worlds where everyone would always think that 1 equals 2, and the minds simulated in this way would have no way around the limitations of their existence, never knowing the truth. Who knows whether most of us do not exist in just such a simulated cosmos: worlds created by an almighty god whose only, aimless goal is to create everything. Precisely because such a being would want to create everything, it would focus on it far more strongly than any other being, giving us a vision of the world even more infinitely absurd than it already seems every day.

But let’s come full circle

Analyzing the goals of whatever we do, we come to the conclusion that conscious beings tend to cause themselves as little unpleasantness and suffering as possible, ultimately trying to avoid it. I believe that the desire to minimize suffering and maximize ultimately felt pleasure is the basic driving force behind the operation of any feeling mind. I believe such aspirations are universal: every sentient being strives for what, in its understanding or vision of the world or of the future, will lead to its greatest satisfaction or least suffering, trying to balance both desires. Ultimately, our actions necessarily reduce to fulfilling our own desires first of all, or even exclusively, because even extremely altruistic actions are the fulfillment of our resultant desires.

I believe that negative desires, i.e. the desire to avoid suffering, unpleasantness, and inconvenience, dominate because of the asymmetry between unpleasantness and pleasure: unpleasantness is felt much more intensely and longer-lastingly than pleasure (there is no common chronic bliss or intense, long-lasting sensation of pleasure). This makes sense from an evolutionary point of view, where avoiding dangers is many times more important than seeking bliss beyond what results from the satisfaction of basic needs.
As long as beings and civilizations are guided by the gradient of unpleasantness, if a gradient of pleasure can guide them at all, the main driving force of each being will be the attempt to minimize its own unpleasantness.

I treat even the pursuit of ever greater pleasures as a negative pursuit, because one is then trying to fulfill desires, and so must have unfulfilled desires, whose long-term non-fulfillment certainly causes negative feelings. Satisfying some desires causes others to develop, so fulfillment, which seems to be the ultimate goal of every being, is difficult to achieve.

I speculate that in order to achieve fulfillment and happiness, since most beings do not have promortalist tendencies, and whether the universe is finite or not, civilizations strive to create virtual paradises for themselves: virtual worlds in which they can fulfill themselves in a chosen form, undoubtedly far more advanced than the imaginings of pre-transhumanist civilizations. It might be a world in which minds no longer create new minds, or one in which they simulate quadrillions of lucky ones; a world in which ever more dreams gradually come true and beings become happier and happier, in which misery and effort are appreciated and every victory is earned; or a world containing only a few minds, or one fused mind, a mental unity endowed with superintelligence, like a god or a singleton, feeling bliss or ecstasy or simply being satisfied with itself, having no goal, or aiming at being an end in itself or at knowing the universe, either existing in a nirvana-like suspension, experiencing everything internally, or experiencing almost nothing. I believe the creation of the best possible world is a goal for every civilization, whose realization is only a matter of time thanks to the development of superintelligence, assuming it is not hostile or dangerous, which is the biggest obstacle on the way to the best world. Such worlds, the objectively and computationally best achievable, would seem to be pursued by post-transhumanist civilizations. The computing power to achieve such goals seems readily attainable, which is why I tend to believe that most civilizations strive toward this goal and achieve it relatively quickly, possibly even within decades of the creation of superintelligence.

A question that at times seems meaningless to me, and at times fundamental, is the question of the final victory of selfishness or altruism. If there is enough computing power, everyone can be saved, so it pays for everyone to strive to create a perfectly benevolent savior AI. If, however, it turns out that the computing power is not enough for everyone, then perhaps a merciful AI would only create temporary salvations, reducing already unbearable pain or preventing only the most terrible tortures. Then the most desirable state could not be achieved through it. I don’t know how civilizations would behave in such a situation; perhaps they would even actively fight the merciful AI, leading to galactic wars: wars for the computing power needed to simulate the virtual heavens.

Yet even such vast numbers of diverse minds could be simulated through the acausal collaboration of superintelligences from different universes, as A. Turchin argues in the article https://publication/347491862_Back_to_the_Future_Curing_Past_Sufferings_and_S-Risks_via_Indexical_Uncertainty. Superintelligences could share the tasks of simulating different types of entities. No communication (impossible between universes) would be necessary for this. Knowing the probabilities of different forms of life and suffering, each superintelligence would randomly choose which type of being to save, and how, maximizing the chance that everyone is eventually saved. If each of them chooses this way, together they can create machinery unlimited by the matter of their own universe, with potentially infinite, or at least sufficient, computing power at their disposal.
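The division of labor described above can be sketched in a few lines. The categories and their probabilities below are invented for illustration; the point is only that independent agents drawing tasks from the same shared distribution end up covering every kind of mind in proportion to how common it is, with no communication:

```python
# Sketch of the acausal task division described above.
# Assumption: the categories and weights are hypothetical; each
# superintelligence, unable to communicate across universes, draws its
# rescue task from the same distribution of estimated mind types.
import random

CATEGORIES = {            # hypothetical kinds of minds to rescue
    "biological": 0.50,
    "uploaded":   0.30,
    "exotic":     0.20,
}

def choose_task(rng: random.Random) -> str:
    """One superintelligence's independent, weighted random choice."""
    names = list(CATEGORIES)
    weights = [CATEGORIES[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Simulate many independent superintelligences making the same kind of draw.
rng = random.Random(0)
tasks = [choose_task(rng) for _ in range(100_000)]
coverage = {name: tasks.count(name) / len(tasks) for name in CATEGORIES}
print(coverage)  # each share lands close to its weight, so all kinds are covered
```

With enough agents, every category is simulated by someone, which is the sense in which the ensemble behaves like one coordinated machine without any channel between universes.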

Summing up, our future after death seems to be determined by our measure: the measure of the various futures that are realized. Maybe we’ll exist as Boltzmann brains dreaming of being revived in some laboratory. If simulations exist, sooner or later we should find ourselves (most of our measure) in one of them, depending on the numbers of their different types. We face the great choice of either putting our hand to this or treating it as an abstraction. In the worst scenario, we also face a perhaps even greater dilemma: if the computational power is not enough to save everyone, whether to save us, the creators, or only those beings to whom the universe, in its indifference, has provided near-infinite suffering.

I have set out my thoughts above. I don’t know in how many places I have missed important factors. These are almost entirely speculations.