How does cryonics make sense in the age of high x-risk? As p(doom) increases, cryonics seems like a worse bet because (1) most x-risk scenarios would result in frozen people/infrastructure/the world being destroyed and (2) revival/uploading would be increasingly likely to be performed by a misaligned ASI hoping to use humans for its own purposes (such as trade). Can someone help me understand what I’m missing here or clarify how cryonics advocates think about this?

This is my first quick take—feedback welcome!
It makes the same kind of sense as still planning for a business-as-usual 10-20 year future. There are timelines where the business-as-usual allocation of resources helps, and allocating the resources differently often doesn’t help with the alternative timelines. If there’s extinction, how does not signing up for cryonics (or not going to college etc.) make it go better? There are some real tradeoffs here, but usually not very extreme ones.
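To make that concrete, here is a toy expected-value sketch of the decision (every number in it is invented purely for illustration):

```python
# Toy expected-value comparison of signing up for cryonics vs. not,
# split by timeline. All numbers are invented for illustration.

p_doom = 0.5          # hypothetical chance of extinction
p_revival = 0.05      # hypothetical chance revival works, given survival
value_revival = 1000  # utility of being revived (arbitrary units)
cost = 1              # lifetime cost of signing up (same units)

# In extinction timelines both choices end the same way; the only
# difference is the sunk cost, so the decision is dominated by the
# surviving timelines.
ev_sign_up = p_doom * (-cost) + (1 - p_doom) * (p_revival * value_revival - cost)
ev_skip = 0.0

print(f"sign up: {ev_sign_up:+.1f}")  # +24.0 with these numbers
print(f"skip:    {ev_skip:+.1f}")     # +0.0
```

The point isn’t the specific numbers: it’s that the doom branch contributes at most the sunk cost either way, so the choice is settled almost entirely by the timelines where business-as-usual planning matters.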
Thank you for your thoughtful response.

It makes the same kind of sense as still planning for a business-as-usual 10-20 year future.
Agreed. I don’t know whether that approach to planning makes sense either, though. Given a high (say 90%)[1] p(doom) in the short term, would a rational actor change how they live their life? I’d think yes, in some ways that are easier to accept (assigning a higher priority to short-term pleasure, maybe rethinking effortful long-term projects that involve significant net suffering in the short term) as well as some less savoury ways that would probably be irresponsible to post online but would be consistent with a hedonistic utilitarian approach (i.e., prioritizing minimizing suffering).
[1] Choosing 90% because that’s what I would confidently bet on—I recognize many people in the community would assign a lower probability to existential catastrophe at this time.
I’m signed up for Alcor.

I straightforwardly agree that the more likely I am to die of x-risk, the less good a deal, probabilistically, cryonics is.
(I don’t particularly buy that cryonics patients are more likely to be utilized by misaligned superintelligences than normally living humans. Cryopreserving destroys structure that the AI would have to reconstruct, which might be cheap, but isn’t likely to be cheaper than just using intact brains, scanned with superintelligently developed technology.
But, yep, to the extent that living through an AI takeover might entail an AI doing stuff with your brain-soul that you don’t like, being cryopreserved also exposes you to that risk.)
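To put rough numbers on “less good a deal”, here is a quick sweep over p(doom), reusing the invented figures from the sketch earlier in the thread (where the bet crosses into negative territory depends entirely on those made-up inputs):

```python
# Sweep p(doom) with the same invented figures as the earlier sketch:
# cost 1, 5% revival chance given survival, revival worth 1000.
p_revival, value_revival, cost = 0.05, 1000, 1

for p_doom in (0.1, 0.5, 0.9, 0.99):
    ev = -cost + (1 - p_doom) * p_revival * value_revival
    print(f"p(doom)={p_doom:.2f}  EV={ev:+.1f}")

# Prints +44.0, +24.0, +4.0, -0.5: the bet worsens monotonically with
# p(doom), turning negative only once the surviving-timeline payoff
# drops below the cost.
```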
Thank you so much for the response! Great to hear from someone who’s actually bought into cryonics :-)
I don’t particularly buy that cryonics patients are more likely to be utilized by misaligned superintelligences than normally living humans. Cryopreserving destroys structure that the AI would have to reconstruct, which might be cheap, but isn’t likely to be cheaper than just using intact brains, scanned with superintelligently developed technology.
The cardinal distinction I see here is that if there’s a delay between when uploads are announced and when they occur, living people retain the option to end their lives. That distinction is meaningful insofar as one would prefer death over a high probability of indefinite (or perpetual) suffering.
Cryonics is only performed on patients who are already clinically dead, as a last chance to survive. The patients who aren’t revived don’t lose anything except the hope of coming back to life.
I understand that. I’m not sure I understand your point here, though—wouldn’t it still be an arguably poor use of effort to sign up for cryonics if likely outcomes ranged from (1) an increasingly unlikely chance of people being revived, at best, to (2) being revived by a superintelligence with goals hostile to those of humanity, at worst?