The Dilemma of Worse Than Death Scenarios

In this post I will write about what worse than death scenarios are, and how and why we should prevent them. I would recommend reading with caution if you are prone to worrying about this topic, as this post contains ideas which may be very distressing.

A worse than death scenario can be defined as any scenario in which the observer would prefer to die rather than continue living. A distinction should be made between preferring to experience nothing temporarily and then resume living, and preferring to experience nothing forever. For example, most people would prefer to be under general anaesthetic during a necessary operation, but if no anaesthetic were available, they would not choose to die instead. With the knowledge that the discomfort experienced during the operation is necessary to continue living, many would choose to go through with it (this can obviously vary depending on the operation and the observer).

As the observer would prefer to die in a worse than death scenario, one can assume that they would be willing to do anything to escape it. Thus, it follows that we should do anything to prevent worse than death scenarios from occurring in the first place; it is our first priority. In my opinion, positive scenarios cannot change this, due to the following observation: there is no positive scenario so good that you would do anything to keep it from ending, at least in our current human form. I cannot think of any scenario so positive for the observer that they would not care if it were certain to kill them.

Worse than death scenarios vary in severity. The most basic example would be someone being kidnapped and tortured to death. If technology eventually allows immortality or artificial superintelligence (ASI), there are scenarios of much greater severity. The most extreme example would be an indefinite state of suffering comparable to the biblical Hell, perhaps caused by an ASI running simulations. Obviously, preventing this has a higher priority than preventing scenarios of lower severity.

Scenarios which could mean indefinite suffering:

1. ASI programmed to maximise suffering

2. Alien species with the goal of maximising suffering

3. We are in a simulation and some form of “hell” exists in it

4. ASI programmed to reflect the values of humanity, including religious hells

5. Unknown unknowns

Worse than death scenarios are highly neglected. This applies to risks of all severities. It seems very common to be afraid of serial killers, yet I have never heard of someone with the specific fear of being tortured to death, even though most people would agree that the latter is worse. This pattern is also seen in the field of AI: the “killer robot” scenario is very well known, as is the paperclip maximiser, but the idea of an unfriendly ASI creating suffering is not talked about as often.

There are various reasons for this neglect. Firstly, worse than death scenarios are very unpleasant to think about; it is more comfortable to discuss possibilities of ceasing to exist. Secondly, they are very unlikely compared to other scenarios. However, avoiding the discussion does not seem justified: something being unpleasant to think about is not a valid reason to ignore it, and the very low probability of these scenarios is balanced by their extreme disutility. This inevitably leads to Pascal’s Mugging.
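To make the expected-value reasoning behind this concrete, here is a toy calculation with purely hypothetical numbers (the probability and disutility figures are illustrative assumptions, not estimates):

$$
\mathbb{E}[U] = p \cdot U = 10^{-9} \cdot \left(-10^{15}\right) = -10^{6},
$$

which dwarfs an everyday risk such as $10^{-3} \cdot \left(-10^{2}\right) = -0.1$. This is the structure of Pascal’s Mugging: stakes that can be made arbitrarily large swamp an arbitrarily small probability.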

Methods which may reduce the probability of indefinite worse than death scenarios (in order of effectiveness):

1. Suicide

2. Working on AI safety

3. Thinking of ways of reducing the probability

Suicide, depending on your theory of personal identity, may reduce the probability to zero. However, if you believe that there is no difference between copies of you, there may still be a possibility of being resurrected in the future. As we aren’t certain about what happens to the observer after death, it is unknown whether death makes worse than death scenarios impossible. I believe there are many ways in which it could reduce the probability, but the key question is: could it increase the probability? An argument against suicide is that people who commit suicide may be more likely to go to “hell” than those who don’t. This is because an entity that creates a hell has values which accept suffering and treat life as a positive which should not be discarded. On the other hand, an entity with values related to efilism/antinatalism (philosophies in which suicide is generally accepted) would not create a hell at all. Of course, this is all based on a lot of speculation.

There is a risk that a suicide attempt will fail and leave you in a disabled state. This could make you more vulnerable with respect to indefinite worse than death scenarios. However, I would argue that this disadvantage is not decisive, because the only potential way to evade an entity powerful enough to cause these scenarios would be suicide, and any attempt carries a risk of failure.

The second option listed is working on AI safety. This is because a future ASI is the only such entity we could influence now; we cannot do anything about superintelligent malevolent aliens or about the possibility that we are in a simulation. Donating money to suffering-focused AI safety organizations may reduce the chance of an unfriendly ASI being created, and it does not seem to increase the probability of worse than death scenarios in any way. Therefore it seems better than not donating.

The last option is thinking of ways of reducing the probability of the scenarios. It is possible that by doing this you will invent a new method. This also includes raising awareness about the scenarios in any way so that other people will also try to invent methods.

The dilemma is that it does not seem possible to continue living as normal once you take the prevention of worse than death scenarios seriously. If it is agreed that anything should be done to prevent them, then Pascal’s Mugging seems inevitable. Suicide speaks for itself, and even the other two options, if taken seriously, would change your life. What I mean is that it would seem rational to devote your life completely to these causes. It would be rational to do anything to obtain money to donate to AI safety, for example, and you would be obliged to sleep for exactly nine hours a day to improve your mental condition, increasing the probability that you will find a way to prevent the scenarios. I would be interested in hearing your thoughts on this dilemma and whether you think there are better ways of reducing the probability.