Why I think worse-than-death outcomes are not a good reason for most people to avoid cryonics

Content note: torture, suicide, things that are worse than death

Follow-up to: http://lesswrong.com/r/discussion/lw/lrf/can_we_decrease_the_risk_of_worsethandeath/

TLDR: The world is certainly a scary place if you stop to consider all of the tail-risk events that might be worse than death. It’s true that you run a tail risk of experiencing one of these outcomes if you choose to undergo cryonics, but it’s also true that you run it by choosing not to kill yourself right now, or before you are incapacitated by a traumatic brain injury or a neurodegenerative disease. I think these tail-risk events are extremely unlikely and I urge you not to kill yourself because you are worried about them; I also think they are extremely unlikely in the case of cryonics, and I don’t think the possibility of them occurring should stop you from pursuing it.

I

Several members of the rationalist community have said that they would not want to undergo cryonics upon their legal death because they are worried about a specific tail risk: that they might be revived into a world that is worse than death and that doesn’t allow them to kill themselves. For example, lukeprog mentioned this in a LW comment:

> Why am I not signed up for cryonics?
>
> Here’s my model.
>
> In most futures, everyone is simply dead.
>
> There’s a tiny sliver of futures that are better than that, and a tiny sliver of futures that are worse than that.
>
> What are the relative sizes of those slivers, and how much more likely am I to be revived in the “better” futures than in the “worse” futures? I really can’t tell.
>
> I don’t seem to be as terrified of death as many people are. A while back I read the Stoics to reduce my fear of death, and it worked. I am, however, very averse to being revived into a worse-than-death future and not being able to escape.
>
> I bet the hassle and cost of cryonics disincentivizes me, too, but when I boot up my internal simulator and simulate a world where cryonics is free, and obtained via a 10-question Google form, I still don’t sign up. I ask to be cremated instead.
>
> Cryonics may be reasonable for someone who is more averse to death and less averse to worse-than-death outcomes than I am. Cryonics may also be reasonable for someone who has strong reasons to believe they are more likely to be revived in better-than-death futures than in worse-than-death futures. Finally, there may be a fundamental error in my model.


In this post I’m going to explain why I think that, with a few stipulations, the risk of these worse-than-death tail events occurring is close to the risk you already accept by choosing to live out your natural lifespan. Therefore, based on revealed preference, in my opinion they are not a good reason for most people to forgo cryonics. (There are, of course, several other reasons for which you might choose not to pursue cryonics, which will not be discussed here.)

II

First, some points about the general landscape of the problem, which you are welcome to disagree with:

- In most futures, I expect that you will still be able to kill yourself. In these scenarios, it’s at least worth seeing what the future world is like so that you can decide whether or not living in it is worth it for you.

- Therefore, worse-than-death futures are exclusively ones in which you are not able to kill yourself. Here are two commonly discussed scenarios for this, and why I think they are unlikely:

-- You are revived as a slave for a future society. This is very unlikely for economic reasons: a society with technology advanced enough to revive cryonics patients can almost certainly extend lifespans indefinitely and create additional humans at low cost. If a society is evil enough to want slaves, creating new humans for that purpose is going to be cheaper than reviving old ones with a complicated technology that might not work.

-- You are revived specifically by a malevolent society/AI that is motivated to torture humans. This is unlikely for scope reasons: any society/AI with technology advanced enough to do this could create or simulate additional persons tailored to fit its interests more precisely. For example, an unfriendly AI would likely simulate all possible human/animal/sentient minds until the heat death of the universe, using up all available resources in the universe to do so. Your mind, and minds very similar to yours, would likely already be included in these simulations many times over. In this case, doing cryonics would not actually make you worse off. (Although of course you would already be quite badly off, and we should definitely try our best to avoid this extremely unlikely scenario!)

If you are worried about a particular scenario, you can stipulate to your cryonics organization that you would like to be removed from preservation if intermediate developments occur that make that scenario more likely, thus substantially reducing the risk of it occurring. For example, you might say:

- If a fascist government that tortures its citizens indefinitely and doesn’t allow them to kill themselves seems likely to take over the world, please cremate me.

- If an alien spaceship with likely malicious intentions approaches the earth, please cremate me.

- If a sociopath creates an AI that is taking over foreign cities and torturing their inhabitants, please cremate me.

In fact, you probably wouldn’t even have to ask: in most of these scenarios, the cryonics organization is likely, out of compassion, to remove you from preservation in order to protect you from these bad outcomes.

But even with such a set of stipulations or compassionate treatment by your cryonics organization, it’s still possible that you could be revived in a worse-than-death scenario. As Brian Tomasik puts it:

> Yeah, that would help, though there would remain many cases where bad futures come too quickly (e.g., if an AGI takes a treacherous turn all of a sudden).

However, I would add a further point: there’s also no guarantee that one of these bad scenarios won’t unfold too quickly for you to react today, or at any point before your legal death.

If you’re significantly worried about worse-than-death outcomes occurring in a possible future in which you are cryopreserved, then it seems like you should also be worried about one of them occurring in the relatively near term, while you are still alive. It also seems that you should be an anti-natalist, since any children you have would be exposed to the same risks.

III

You might argue that this is still your true rejection: while it’s true that a malevolent agent could take over the world too quickly for you to react, now or in the near future, you would rather trust yourself to kill yourself than trust your cryonics organization to remove you from preservation in these scenarios.

This is a reasonable response, but one possibility you might not be considering is that you could develop a condition that renders you unable to make that decision.

For example, people can live for decades with traumatic brain injuries, neurodegenerative diseases, comas, or other conditions that prevent them from deciding to kill themselves while still retaining the core aspects of their memories and personality that make them “them” (even if those aspects are inaccessible because of damage to the brain’s communication systems). If aging is slowed, these incapacitating conditions could last for even longer periods of time.

It’s possible that while you’re incapacitated by one of these unfortunate conditions, a fascist government, evil aliens, or a malevolent AI will take over.

These incapacitating conditions are each somewhat unlikely to occur, but if we’re talking about tail events, they deserve consideration. And they aren’t necessarily less likely than being revived from cryostasis, which is of course also far from guaranteed to work.

It might sound like my point here is “cryonics: maybe not that much worse than living for years in a completely incapacitating coma?”, which is not necessarily the most ringing endorsement of cryonics, I admit.

But my main point here is that your revealed preferences might indicate that you are more willing to tolerate some very, very small probability of things going horribly wrong than you realize.

So if you’re OK with the risk that you will end up in a worse-than-death scenario even before you do cryonics, then you may also be OK with the risk that you will end up in a worse-than-death scenario after you are preserved via cryonics (both of which seem very, very small to me). Choosing cryonics doesn’t “open up” a terrible tail risk that would never occur otherwise. That risk already exists.
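
To make the comparison explicit, here is a minimal way to write down the claim. The symbols are just my shorthand for the probabilities discussed above, not estimates of their values:

$$p_{\mathrm{cryo}} = P(\text{inescapable worse-than-death outcome} \mid \text{cryopreserved})$$

$$p_{\mathrm{default}} = P(\text{inescapable worse-than-death outcome} \mid \text{no cryonics, natural lifespan})$$

$$p_{\mathrm{cryo}} \approx p_{\mathrm{default}} \quad \text{and} \quad p_{\mathrm{cryo}},\, p_{\mathrm{default}} \ll 1$$

If both probabilities are tiny and of comparable size, their difference is a negligible term in the decision, and the choice of whether to sign up should turn on the other considerations (cost, hassle, probability of successful revival) rather than on worse-than-death tail risk.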