Not all tail risk is created equal. Assume your remaining natural lifespan is L years, and revival tech will be invented R years after that. Refusing to kill yourself is effectively betting that no inescapable worse-than-death future will occur in the next L years; refusing cryonics is effectively betting the same, but for the next L + R years.
Assuming revival tech is invented only after you die, the probability of ending up in some variation of hell is strictly greater with cryonics than without it—even if both chances are very small—simply because hell has more time to get started.
It’s debatable how large the difference is between the probabilities, of course. But some risk thresholds legitimately fall between the two.
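The exposure-time argument above can be made concrete with a toy model. Purely for illustration, assume a constant, independent annual hazard p of an inescapable worse-than-death future arising; the numbers for p, L, and R below are made up, not estimates:

```python
def p_hell(annual_hazard: float, years: float) -> float:
    """Probability that at least one worse-than-death event occurs,
    given a constant, independent annual hazard."""
    return 1 - (1 - annual_hazard) ** years

# Illustrative numbers only: L = 40 remaining years, R = 100 years to revival.
p, L, R = 1e-5, 40, 100

without_cryonics = p_hell(p, L)       # at risk for L years
with_cryonics = p_hell(p, L + R)      # at risk for L + R years

# More exposure time means strictly more risk, however small both numbers are.
assert with_cryonics > without_cryonics
```

The point survives any choice of hazard model, as long as the hazard is positive: the cryonics bet simply runs for more years.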
(upvoting even though I disagree with your conclusion—I think it’s an interesting line of thought)
Upvoted—I agree that the probability is higher if you do cryonics.
However, a lot of the framing of this discussion is that “if you choose cryonics, you are opening up Pandora’s box because of the possibility of worse-than-death outcomes.” This triggers all sorts of catastrophic cognitions and causes people to have even more of an ugh field around cryonics. So I wanted to point out that worse-than-death outcomes are certainly still possible even if you don’t do cryonics.
I think the argument is more “if I’m going to consider beneficial but unlikely outcomes such as successful cryonic revival, then harmful but unlikely outcomes also come on to the table”.
A normal life may have a small probability of a worse-than-death scenario, but we’re not told to consider small probabilities when considering how good a normal life is.
One thing missing from this whole story is that a superintelligence will probably be able to resurrect people even if they were not cryopreserved, by creating copies of them based on digital immortality data. The problem of the identity of a copy versus the original is not solved, but an AI may be able to solve it somehow.
However, just as there are different cardinalities of infinity, there are different types of infinite suffering. An evil AI could constantly upgrade its victim, so that its subjective experience of suffering increases a million times a second forever, and it could convert half a galaxy into suffertronium.
Quantum immortality in a constantly dying body is not optimised for aggressive growth of suffering, so it could be more “preferable”.
Unfortunately, such timelines in the space of all possible minds could merge: that is, after death you could appear in a very improbable universe where you are resurrected in order to suffer. (Here and in the next sentence I also use the thesis that if two observer-moments are identical, their timelines merge, which may require a longer discussion.)
But a benevolent AI could create an enormous number of positive observer-moments following any possible painful observer-moment, so that it effectively rescues any conscious being from the jail of an evil AI. Then any painful moment will have a million positive continuations with much higher measure than the measure of the universes owned by the evil AI. (I also assume that benevolent AIs will dominate over suffering-oriented AIs, and will wage an acausal war against them to control more observer-moments of human beings.)
After I imagined such an acausal war between evil and benevolent AIs, I stopped worrying about infinite suffering from an evil AI.
One thing missing from this whole story is that a superintelligence will probably be able to resurrect people even if they were not cryopreserved, by creating copies of them based on digital immortality data.
Enough of what makes me me hasn’t made, and won’t make, it into digital form by accident, short of post-singularity means, that I wouldn’t identify such a poor individual as being me. It would be a neuro-sculpture on the theme of me.
By the same token, you might not identify with the person who will wake up in your body after tomorrow night’s sleep. There will be large informational and biochemical changes in your brain, as well as a discontinuity of the stream of consciousness during deep sleep.
I mean that an attempt to deny identity with your copies will result in even larger paradoxes.
I don’t buy it. Why don’t you wake up as Britney Spears instead? Clearly there’s some information in common between your mind patterns. She is human after all (at least I’m pretty sure).
Clearly there is a sufficient amount of difference that would make your copy no longer you.
I think it is probable that cryonics will preserve enough information, but I think it is nigh impossible that my mere written records could be reconstructed into me, even by a superintelligence. There is simply not enough data.
But given Many Worlds, a superintelligence certainly could attempt to create every possible human mind by using quantum randomization. Only a fraction of these could be realized in any given Everett branch, of course. And most possible human minds are insane, since their memories would make no sense.
Given the constraint of “human mind” this could be made more probable than Boltzmann Brains. But if the Evil AI “upgrades” these minds, then they’d no longer fit that constraint.
We still don’t know how much information is needed for informational identity. It could be a rather small set of core data, which is what lets me find the difference between me and Britney Spears.
A superintelligence could also exceed us in gathering information about the past, and could have radically new ideas. Quantum randomization in MWI is one possible approach, but such randomization might be applied only to “unknown important facts”. For example, if it is unknown whether I loved my grandmother, that fact could be replaced by a random bit. Then in the two branches of the multiverse there will be two people, but neither of them will be insane.
Also, something like acausal trade could take place between different resurrection engines in different parts of the universe. Because if the me-who-didn’t-like-grandmother is not the real me, it could correspond to another person who needs to be resurrected, but who existed in a branch of MWI that split off around 1900. As a result, all the dead could be easily resurrected, and no combinatorial explosion, measure problems, or insane minds would appear.
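A toy sketch of the random-bit idea above (the listed facts and the whole framing are hypothetical, chosen only to show the combinatorics): each unknown important fact becomes one bit, and the 2^k resulting candidate minds are each internally consistent, to be distributed across branches or claimed by different resurrection engines:

```python
from itertools import product

# Hypothetical unknown facts about the person being reconstructed.
unknown_facts = ["loved_grandmother", "preferred_tea", "feared_dogs"]

# Each assignment of bits yields one candidate mind; in the MWI picture,
# each candidate is realized in a different branch (or, via acausal trade,
# matched to whichever world it actually belongs to).
candidate_minds = [dict(zip(unknown_facts, bits))
                   for bits in product([False, True], repeat=len(unknown_facts))]

print(len(candidate_minds))  # 2**3 = 8 candidates, none of them "insane"
```

The candidate count grows as 2^k in the number of unknown facts, which is why the comment above needs the acausal-trade move to avoid a combinatorial explosion of minds per branch.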
Another option for a superintelligence is to run resurrection simulations, which recreate the whole world from the beginning and sideload into it all available data.
I’m glad I’m not the only one who thinks about this kind of stuff.