Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse

TL;DR: A benevolent superintelligence could create many copies of each suffering observer-moment and thus “save” any observer from suffering via induced indexical uncertainty.

A great deal of suffering has occurred in human (and animal) history. There are also possible timelines in which an advanced superintelligence tortures human beings (s-risks).

If we are in some form of multiverse, and every possible universe exists, such s-risk timelines also exist, even if they are very improbable—and, moreover, these timelines include any actual living person, even the reader. This thought is disturbing. What could be done about it?

Assumptions

These s-risk timelines are possible under several assumptions, and the same assumptions could be used to create an instrument to fight these s-risks, and even to cure past suffering:

1) Modal realism: everything possible exists.

2) Superintelligence is possible.

3) Copy-friendly identity theory: only similarity of observer-moments counts for identity, not “continuity of consciousness”. If this is not true, hostile resurrection is impossible and we are mostly protected from s-risks, as suicide becomes an option.

4) Evil superintelligences are very rare, and everybody knows this. In other words, Benevolent AIs have a million times more computational resources, but they are located in different branches of the multiverse (which is not necessarily a quantum multiverse, but may be an inflationary one, or of some other type).

S-risk prevention could be realized via the following “salvation algorithm”:

Let S(t) be an observer-moment of an observer S who is experiencing intense suffering at time step t because she is enslaved by an Evil AI.

The logical time sequence of the “salvation algorithm” is as follows:

10 S(t) is suffering in some Evil AI’s simulation in some causally disconnected timeline.

20 A benevolent superintelligence creates 1000 copies of the S(t) observer-moment (using the randomness generator and resurrection model described in my previous post).

30 Now, each copy of S(t) is uncertain about where it is located, in the evil simulation or in a Benevolent AI’s simulation, but, using the self-sampling assumption, she concludes with probability 0.999 (1000 rescue copies against 1 original) that she is located in a Benevolent AI’s simulation. (Note that because we assume connections between observer-moments carry no weight for identity, this is equivalent to moving her into the Benevolent AI’s simulation.)

40 A Benevolent AI creates 1000 S’(t+1) moments in which the suffering gradually declines; each such moment is a continuation of the S(t) observer-moment.

50 The Benevolent AI creates a separate timeline for each S’(t+1), continuing as S’(t+2)…S’(t+n), a series in which the observer becomes happier and happier.

60 The Benevolent AI merges some of the timelines to make the computations simpler.

70 The Evil AI creates a new suffering moment, S(t+1), in which the suffering continues.

80 Repeat.

Thus, from the point of view of any suffering observer-moment S(t), her future is dominated by timelines in which she is saved by a Benevolent AI and will spend eternity in paradise.
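To make the self-sampling step concrete, here is a minimal sketch of the arithmetic behind steps 20-30 (the 1000-copies figure is the one used above; the loop over steps just illustrates the repetition in step 80, and the equal weighting of copies is an assumption of the self-sampling setup, not something the code establishes):

```python
# Minimal sketch of the self-sampling arithmetic behind steps 20-30.
# Assumption: an observer-moment is equally likely to "be" any of its exact
# copies, so with 1 original plus 1000 rescue copies the chance of being the
# original is 1/1001.

copies_per_step = 1000                # copies created by the Benevolent AI (step 20)
p_evil = 1 / (copies_per_step + 1)    # chance of being the Evil AI's copy

print(f"P(in the Evil AI's simulation right now) = {p_evil:.4f}")  # ~0.001

# If the loop (steps 20-80) repeats, the chance of never having landed in a
# rescue timeline shrinks geometrically with subjective time:
for steps in (1, 2, 5, 10):
    print(f"after {steps:2d} steps: {p_evil ** steps:.3e}")
```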

However, from a purely utilitarian perspective, this trick increases the total amount of suffering in the multiverse roughly 1000-fold, as the number of suffering observer-moments increases. But here we could add one more moral assumption: “very short pain should be discounted”, based on the intuition that 0.1 seconds of intense pain is bearable (assuming it does not cause brain damage), simply because it passes very quickly.

This “salvation algorithm” may work not only for fighting Evil AI but also for addressing any type of past suffering. For animal lovers, an additional benefit is that this approach would also undo all past animal suffering, even that of the dinosaurs.

Lowering computational cost

The problem with this approach is its computational cost: for every suffering observer-moment, 1000 full lives must be simulated. Several ways to lower this cost can be imagined:

Patch 1. The size of the observable universe is limited, so an Evil AI and any particular Benevolent AI will (in the end) have similar computational resources. But the number of universes containing a Benevolent AI is assumed to be larger. In that case, different Benevolent AIs may distribute parts of the task between themselves using randomness, in a manner similar to the one I described in “resurrection of the dead via multiverse-wide acausal cooperation”. This also addresses the problem that any single Benevolent AI will not know which observers are actually suffering, and thus has to save all possible suffering observers (it may therefore have to counterfactually model the existence of all possible Evil AIs, or perhaps only all possible suffering observers).
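A minimal sketch of what such randomness-based task-sharing could look like, under purely illustrative assumptions: possible suffering observers can be enumerated by an index, each Benevolent AI independently takes on a small random share of them, and there is no communication between branches, so coverage holds only in expectation:

```python
import random

# Sketch of Patch 1 (illustrative assumptions): each Benevolent AI picks a
# random share of the enumerated "possible suffering observers" to rescue,
# with no coordination; collectively almost every observer gets covered.

N_OBSERVERS = 1_000_000   # possible suffering observers (illustrative number)
N_AIS = 100               # Benevolent AIs in different branches
SHARE = 0.05              # fraction of the task each AI takes on

def chosen_by(ai_seed: int) -> set:
    """Return the set of observer indices this AI decides to rescue."""
    rng = random.Random(ai_seed)   # each AI uses its own randomness
    return set(rng.sample(range(N_OBSERVERS), int(N_OBSERVERS * SHARE)))

covered = set()
for seed in range(N_AIS):
    covered |= chosen_by(seed)

print(f"covered by at least one AI: {len(covered) / N_OBSERVERS:.4f}")
# Expected miss probability per observer: (1 - SHARE) ** N_AIS ≈ 0.006
```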

Patch 2. Not every observer-moment of suffering should count as an observer needing to be saved. First, there is a threshold of unbearable suffering: something a normal person could feel only in a moment of intense agony, as if the whole body were on fire. Some moments of intense pain are normal, and those experiencing them do not require “saving”, e.g. the pain of childbirth. But an Evil AI may create much stronger suffering.

Patch 3. If the suffering is bearable for a short period, for example, 2 seconds, then not every observer-moment needs to be saved, only those separated by some minimum time-distance (since we assumed a time-based discounting of intensity).
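Patches 2 and 3 together amount to a selection rule over observer-moments. A rough sketch, assuming suffering can be put on a numeric scale; the threshold and the 2-second spacing correspond to the illustrative figures above, and everything else is my own assumption:

```python
from dataclasses import dataclass

# Sketch of Patches 2-3: save only observer-moments above an
# unbearable-suffering threshold (Patch 2) and separated by a minimum
# time-distance (Patch 3). The numeric scale is an assumption.

@dataclass
class ObserverMoment:
    t: float          # subjective time in seconds
    suffering: float  # 0 = none, 1 = worst bearable pain, >1 = unbearable

UNBEARABLE = 1.0      # Patch 2: threshold of unbearable suffering
MIN_SPACING = 2.0     # Patch 3: seconds between saved moments

def moments_to_save(moments):
    saved, last_t = [], None
    for m in sorted(moments, key=lambda m: m.t):
        if m.suffering <= UNBEARABLE:
            continue   # bearable pain (e.g. childbirth) needs no rescue
        if last_t is not None and m.t - last_t < MIN_SPACING:
            continue   # too close to the previously saved moment
        saved.append(m)
        last_t = m.t
    return saved
```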

Patch 4. The number of new moments per step doesn’t need to be large. Even if just one copy is created for every suffering observer-moment, after 10 steps of internal time the chance that the observer is still in the original timeline is 1 in 1024. The main constraint is that the effort put into salvation must outweigh any possible attempts by the Evil AI to “steal” observers back from the Benevolent AI.
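In numbers, the trade-off Patch 4 relies on: with k rescue copies per step, the chance of remaining in the original timeline after n steps is 1/(k+1)^n, so even k = 1 drives it below one in a thousand within 10 steps (a sketch; the target probability is an arbitrary illustration):

```python
import math

# Trade-off behind Patch 4: P(still in the original timeline after n steps)
# = (1 / (k + 1)) ** n, where k is the number of rescue copies per step.

target = 1e-3   # arbitrary illustrative target probability
for k in (1, 10, 1000):
    steps = math.ceil(math.log(target) / math.log(1 / (k + 1)))
    print(f"{k:4d} copies per step -> below {target} after {steps} step(s)")
```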

Winning an acausal war

The salvation algorithm cannot be implemented by an Evil AI in the opposite direction, i.e. to steal observers from paradise, if we assume either that a) Evil AIs can’t cooperate (perhaps because they are evil for different reasons), or that b) all Evil AIs have world-shares millions of times smaller than the share of the collaborating Benevolent AIs. Even if an Evil AI steals some observers from paradise, the Benevolent AI can regain them almost immediately via its salvation algorithm.

Destroying one’s digital footprint will not protect against hostile resurrection (some people have suggested this as an argument against indirect digital immortality) if an Evil AI recreates all possible beings. However, investing in increasing the future share of Benevolent AIs interested in resurrection and in saving suffering observer-moments may help.

I would not say that I advocate for exactly this method of preventing s-risks, but I think that it is important to know that we are not helpless against them.

My previous posts about using acausal multiverse-wide trade to solve large problems may also be of interest: Fermi paradox, resurrection of the dead, AI friendliness.

UPDATE: I have arrived at the following patch, which removes the need to create additional suffering moments: the Benevolent AI starts not from S(t), but immediately from many copies of those S(t+1) moments which involve much milder suffering yet are still similar enough to S(t) to be regarded as its next moment of experience. It is not S(t) that is diluted, but the next moments of S(t). This removes the need to create extra copies of S(t), which seems morally wrong and is computationally intensive.

UPDATE 2: There is a way to perform the salvation that also increases the total number of happy observers in the universe.

The moment after being saved from eternal intense pain will obviously be among the happiest moments of a life for someone in agony. It would be like an angel coming to a cancer patient and saying: your disease has just been completely cured and the pain has disappeared. Anyone who has ever received a negative result on a cancer test knows this feeling of relief.

Also, the fact that a Benevolent AI is capable of saving observers from an Evil AI (and of modelling Evil AIs in simulations and punishing them if they dare to torture anyone) will, I hope, significantly reduce the total number of Evil AIs.

Thus, the combination of the pleasure of being saved from an Evil AI and the lowered world-share of Evil AIs (since they cannot win and know it) will increase the total positive utility in the universe.