I’m not signed up for cryonics. Partly, this is because I’m poor. Partly, it’s because I’m extremely risk-averse and I can imagine really really horrible outcomes of being frozen just as easily as I can imagine really really great outcomes—in the absence of people walking around who were frozen and awakened later, my imaginings are all the data I have.
I’m sorry for your loss and that of your girlfriend, and I wish her grandfather had not died. While I’m at it, I’ll wish he’d been immortal. But there are two mistaken responses to the fact that human beings die: one is to tout death as a natural and possibly even positive part of the human condition; the other is to find excuses not to deal with it when it happens. Theism with an afterlife is the first; freezing the dead person is the second.
In all likelihood, if and when I stop being poor, my bet and the money behind it are going to be on medicine, and maybe on uploads of living people if there are very promising projects going on by then.
By “extremely risk-averse” do you mean “working hard to maximise persistence odds” or “very scared of scary scenarios”?
You’re right that death while signed up for cryonics is still a very bad thing, though. I don’t think Eliezer would be fine with people’s deaths if they were signed up, but sometimes he makes it seem that way.
I mean something like the second thing. Basically, I would invariably rather bet one dollar than two when the expected payoff is identical for both bets—even odds, say. And if you make it a $1000 bet versus a $2000 bet, I’ll probably prefer the first even if its expected payoff is strictly worse, simply because I can’t tolerate any risk of being out two thousand dollars. (I can’t tolerate much risk of being out a thousand either, given my poor-grad-student finances, but this is assuming I have no “don’t gamble at all” option.)
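This kind of risk aversion is usually modeled with a concave utility function over wealth: two even-odds bets with the same expected dollar value differ in expected utility, and the larger stake comes out strictly worse. A minimal sketch, where the log utility and the $5000 bankroll are assumptions chosen purely for illustration:

```python
import math

def expected_utility(wealth, stake, p_win=0.5, u=math.log):
    """Expected utility of an even-odds bet of `stake` from `wealth`,
    under a concave (risk-averse) utility function `u`."""
    return p_win * u(wealth + stake) + (1 - p_win) * u(wealth - stake)

wealth = 5000  # hypothetical grad-student bankroll, invented for the example
eu_small = expected_utility(wealth, 1000)
eu_large = expected_utility(wealth, 2000)

# Both bets leave expected wealth unchanged, but under log utility
# the smaller stake is strictly preferred.
print(eu_small > eu_large)  # prints True
```

The same shape of utility function explains preferring the $1000 bet over the $2000 one: the concavity penalizes the spread of outcomes, not their average.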
I show no particular tendency to flinch from the deaths of those near me who were not preserved. Do you think my fear of my own death is so much greater as to drive me to irrationality only there, and only on cryonics? I could as easily accuse you of sour grapes for presently not having the money to sign up. Not that I am so accusing—but be wary of who you accuse of rationalization; there are many tragedies in this universe, but you should be careful not to go around accepting the ones that aren’t inevitable.
When I spoke of “not dealing with it”, I didn’t mean to say that you do this with people who die and aren’t signed up for cryonics. (I had already read and was very moved by your piece on Yehuda.) When someone does get frozen, though, it’s easy to categorize them as “maybe not dead”—since if a frozen person weren’t maybe-not-dead, no one would be frozen.
Alicorn, not everything that is less than absolutely awful to believe is therefore false. In the end, either the information is there in the brain or it is not, and that’s a question of neuroscience and the limits of possible revival tech; it’s not something that can possibly be settled by observing which answers are comforting or discomforting.
I’m obviously not being very clear. I’m not making a case that it’s irrational to sign up for cryonics—I’m just saying it’s not appropriate for someone with very high risk aversion, such as myself. I’m informed by the same person who taught me about levels of risk aversion in the first place that no given level of risk aversion is necessarily rational or irrational; it’s just a personal characteristic. It’s quite possible that by making these choices you’ll be around, enjoying a great quality of life, in four thousand years, and I won’t. That would be awesome for you and less awesome for me. I’m just not willing to take the bet.
Describing this as being averse to risks doesn’t make much sense to me. Couldn’t a pro-cryonics person equally well justify her decision as being motivated by risk aversion? By choosing not to be preserved in the event of death, you risk missing out on futures that are worth living in. If you want to venture into bizarre and unlikely science fiction territory, as with your dystopian cannon-fodder speculation, you could just as easily construct nightmare scenarios where cryonics is the better choice. Simply declaring yourself to have “high risk aversion” doesn’t really support one side over the other here.
This reminds me of a similar trope concerning wills: someone could avoid even thinking about setting up a will, because that would be “tempting fate,” or take the opposite position: that not having a will is tempting fate, and makes it dramatically more likely that you’ll get hit by a bus the next day. Of course, neither side there is very reasonable.
I call it risk aversion because if cryonics works at all, it ups the stakes. The money dropped on signing up for it is a sure thing, so it doesn’t factor into risk, and if I get frozen and just stay dead indefinitely (for whatever reason) then all I’ve lost compared to not signing up is that money and possibly some psychological closure for my loved ones. But the scenarios in which cryonics results in me being around for longer—possibly indefinitely—are ones which could be very extreme, in either direction. I’m not comfortable with such extreme stakes: I prefer everything I have to deal with to be within my finite lifespan, in the absence of having a near-certainty about a longer lifespan being awesome.
I don’t doubt that there are some “nightmare” situations in which I’d prefer cryonics—I’d rather be frozen than spend the next seventy years being tortured, for example—but I don’t live in one of those situations.
That’s starting to sound like a general argument for shorter lifetimes over longer ones. Is there a reason this wouldn’t apply just as well to living for five more years versus fifty? There’s more room for extreme positive or negative experiences in the extra 45 years.
Not at all—I’d take straight-up immortality, if somebody offered, although I’d rather have a suicide-option loophole for cases where I’m the only person to survive the heat death of the universe or something. Perhaps I unduly value the (illusion of?) control over my situation. But my reasoning is about the choice as a gamble: my risk aversion makes me prefer not to take the gamble that cryonics unambiguously is, which could go well or badly and has a cost to play.
Are you just scared of the idea of evil aliens, or do you actually think that it’s a significant risk that cryonicists recklessly ignore?
It’s not high on my list of phobias. I don’t judge the risk to be very serious. But then, the tiny risk of evil aliens isn’t opposed to a great chance of eternal bliss; it’s competing with an equally tiny chance of something very nice.
I would guess that however small the chances of being reanimated by benevolent people are, the chances of being reanimated by non-benevolent people are much smaller, just because any benevolent person with the capacity to do so cheaply will want to do so, while most non-benevolent futures I can imagine won’t bother.
Sadists exist even in the present. Unethical research programs are not unheard of in history. This is a little like saying that I shouldn’t worry about walking alone in a city at night in an area of uncertain crime rate, because if someone benevolent happens by they’ll buy me ice cream, and anyone who doesn’t wish me well will just ignore me.
But you wouldn’t choose to die rather than walk through the city, would you?
It’s hard for me to take the nightmare science fiction scenarios too seriously when the default action comes with a well-established, nonfictional nightmare: you don’t sign up for cryonics, you die, and that’s the end.
Economics is key here. What do people stand to gain from acting on you, or against you?
Also note that notions of “benevolence” have varied throughout the ages—and it has not been a monotonically increasing function!
There are times and places in this world when a lone drifter would have been—by default—“benevolently” enslaved by the authorities, but where this default action would change to “put to death” several decades later.
How well one is treated always depends on the economic and political power of the group you are associated with. Do our notions of lawful ownership match those of ancient civilizations? In broad outline, yes, but for specific artifacts they diverge dramatically. If we somehow managed to clone Tutankhamen, recover his mind from the ether, and re-implant it, what are the chances he’s going to get all of his stuff back?
I agree the chances are much smaller, but the question is what happens when you multiply by utility.
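The "multiply by utility" step can be made concrete with a toy calculation. Every probability and utility below is invented purely for illustration; nothing in the thread commits anyone to these numbers:

```python
# Toy expected-utility sum over revival outcomes.
# All probabilities and utilities are made up for illustration.
scenarios = {
    "benevolent revival": (0.05, 1_000_000),      # small chance, large gain
    "malevolent revival": (0.001, -100_000_000),  # far smaller chance, extreme loss
    "never revived":      (0.949, 0),             # equivalent to ordinary death
}

expected_utility = sum(p * u for p, u in scenarios.values())

# A probability fifty times smaller can still dominate the sum
# if the utility attached to it is extreme enough.
print(expected_utility < 0)  # prints True
```

The point is symmetric: with a less extreme loss term, the same arithmetic flips the sign, which is why the disagreement here is really about the utilities and probabilities, not about the multiplication.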
OK, you’re risk-averse. Specifically, you’re scared. If you put a bit of imaginative effort into it you can play out scenarios of awakening into a dystopia, or botched revival, or abusive uploading, or various nastiness. Fair enough.
I propose that you haven’t stretched your imagination far enough.
Staying in doom-n-disaster mode, what are the other ways you could suffer? Illness, madness, brain damage, disability, mistreatment, war, famine, plague, loneliness… it just goes on and on.
Switching to happy mode, what are the good scenarios? Love, long life, wealth and good ideas to use it on… again it goes on and on.
Then take all those scenarios, add a whole lot more mediocre, tolerable, and mildly downbeat ones, and scatter them out ahead of you into an imaginary branching map of infinite reachable futures. Not all are equally easy to reach. There are probability assignments on each, shifting and flowing as your actions and experiences move the chances.
This sort of visualization helps me put my own worrying into perspective. Worrying is a kind of grasping for control, but the future is too big and surprising to be pinned down that way. You can’t control what you get. You can steer into a region with more good chances than bad. To do that you have to learn to discount the low chance of bad as just the price of admission.
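The "steering" idea can be sketched numerically: the same set of futures under two different probability assignments, where the better region still contains a small chance of disaster. The outcomes, utilities, and weights below are all invented for illustration:

```python
# Four stylized futures with invented utilities.
OUTCOMES = [("great", 10), ("fine", 1), ("bad", -5), ("awful", -50)]

def expected_value(weights):
    """Expected utility of the outcome set under a probability assignment."""
    return sum(w * value for w, (_, value) in zip(weights, OUTCOMES))

steered   = [0.30, 0.58, 0.10, 0.02]  # steering toward the good region
unsteered = [0.30, 0.20, 0.30, 0.20]

# Steering is better in expectation even though the small chance of
# "awful" never drops to zero -- that residual risk is the price of admission.
print(expected_value(steered) > expected_value(unsteered))  # prints True
```

Steering changes the weights, not the map: the worst outcome stays reachable in both assignments, which is exactly the sense in which a low chance of bad has to be discounted rather than eliminated.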
I’m curious what really horrible outcomes you can imagine. That’s not something that had ever occurred to me; I can’t imagine a worse outcome than not being revived, which seems equivalent to just being normally dead.
This is probably symptomatic of reading too much science fiction, but I could be revived by evil aliens, or awakened into a dystopian society that didn’t have enough raw materials to make robots and wanted frozen people for cannon fodder, or I could be uploaded instead of outright defrosted and then suffer a glitch that would cause eternal torment/boredom/arithmetic problems, or some form of soul theory could turn out to be right and there could be grandiose metaphysical consequences… I have a very fertile imagination.
Perhaps you have read too much science fiction and not enough history. Judging by recent history, I worry far more about what is likely to happen between now and when I can expect to die, in 30-50 years, than about the essentially unknowable far future.