You appear to have completely abandoned your original reason for not signing up for cryonics (that you’ve come to terms with death) in light of MartinB’s question and switched to a new reason (that you would only like to live indefinitely if your life is not interrupted by an intermission of unknown duration) without explicitly acknowledging that you have done so. This makes me somewhat suspicious of your reasoning on this issue.
For what it’s worth, I’m currently unconvinced by the arguments for signing up for cryonics but your reasoning here looks dubious to me even though I share your conclusion.
You appear to have completely abandoned your original reason for not signing up for cryonics (that you’ve come to terms with death) in light of MartinB’s question and switched to a new reason (that you would only like to live indefinitely if your life is not interrupted by an intermission of unknown duration) without explicitly acknowledging that you have done so.
I don’t see these as different reasons, but two components of a point of view that hasn’t been fully articulated. However, I share it. To accept death: once it’s over, it’s over. The things that you valued about life cannot be recovered by resuscitation 200 years later. Nevertheless, life is good. One more day, one more year, one more decade like today would be great. (If anyone can articulate this more fully, please do!)
This seems to me somewhat analogous to a situation where war or natural disaster destroys your home and kills most of your friends and family, but you escape and have the opportunity to start afresh in a new and unfamiliar culture. That obviously sounds like a pretty unpleasant situation, and vastly less preferable than if the war or disaster had never occurred, but I would still prefer to survive rather than die. I can imagine that some people might feel differently when presented with that choice, however.
byrnema agreeing with TobyBartels in wanting always to live another day, but being indifferent to his or her own cryonic suspension and revival:
To accept death: once it’s over, it’s over. The things that you valued about life cannot be recovered by resuscitation 200 years later. Nevertheless, life is good. One more day, one more year, one more decade like today would be great. (If anyone can articulate this more fully, please do!)
I, too, want always to live another day (unless my health gets so bad I no longer pay attention to anything or anyone except myself and my pain) but am indifferent to my being cryonically revived.
The way I place these two aspects of my desire into a coherent view is to note that I am useful to the world now (and tomorrow, and next year). In fact, on any given day, the way for me to maximize my usefulness to the world will almost certainly be for me to try to keep on living and to stay as healthy as possible because the source of almost all “wealth” or “usefulness” is human creativity (that is, human intelligence combined with a sincere desire to be useful to the world) and a human’s creativity ends when his or her life ends.
Now if I were to be cryonically suspended and then revived, it is almost certain that an intelligence explosion—more precisely, an explosion of engineered intelligences which will be much more useful (for whatever purpose to which they are put) than any human intelligence ever was—will have taken place, because that is the only thing anyone can think of that would make my revival possible. But I would not be able to help engineered intelligences improve the world: with my relatively puny intelligence, I would just get in their way. Oh, sure, the machines could radically improve my intelligence, making me a “transhuman”, but from where I stand now, this strategy of continuing my usefulness into the post-human era by becoming a transhuman has very low expected utility relative to my simply leaving the post-human era up to the machines (and transhumans other than me). A much better application of my resources is to try to increase the probability of a good intelligence explosion.
In other words, I see the fact that no one has yet figured out how to transform ordinary matter like you would find in a chair or a silicon wafer into an engineered intelligence superior to human intelligence as “good news” in the sense that it gives me an opportunity to be useful. (And I am relatively indifferent to the fact that I am suffering a lot more than I would be if someone had already figured out how to make engineered intelligences.) For the same reason, if offered the chance to be teleported back in time 2000 years, I would take it (even if I could not take any of the wealth, tech or knowledge of the past 2000 years with me) because that would increase the expected usefulness of my intelligence and of my simple human life.
So that is why I am indifferent to my own cryonic suspension and revival: even though I will probably be just as intelligent and just as interested in improving the world after my revival as I am now, it will not matter the way it matters now, because my relative ability to improve the world will be less.
In other words, the way I form my wants into a coherent “explanation” or “system” is to say that I am interested in living as long as practical before the intelligence explosion, but approximately indifferent to continuing my life after it. And from that indifference flows my indifference to my being cryonically suspended and revived.
Another thing: I have found that I am able to take coherent actions over extended periods of time to keep on living, but I have not been able to make any non-negligible effort towards my being successfully suspended and revived. In other words, there is a sense in which the values that I have are an empirical matter not under my control. If mattnewport or someone else is able to make a reply to this comment which points out an inconsistency in the “explanation” or “system” I have given above, well, I might be chagrined (because most people do not like to admit in public to holding inconsistent goals or values), but I will probably continue to be unable to motivate myself to take effortful actions over a long period of time to maximize the probability of my successful suspension and revival.
In other words, if as a result of a debate here on Less Wrong (and out of a desire to appear consistent and stable in my goals and values, out of a desire to be accepted by the cool and wonderful people of SIAI or out of some other desire) I were to announce that I have “changed my mind” and I now believe that my being suspended and revived is the right thing for me to choose, I do not see why anyone would care all that much. My desire for my actions and choices to be consistent with my professed values will in that case probably cause me to sign up for cryonics if the cost of signing up is low, but to be honest with you, it is extremely unlikely that anything would cause me to work hard over a long period of time to ensure my suspension and revival because I just do not care and I doubt severely that I could make myself care even if I wanted to make myself care. (There are many goals and values I have unsuccessfully tried to make myself care about.)
One more thing: a lot of people (mostly those who are “motivated” and consequently able to take effortful actions over a long period of time about cryonics) will probably take what I just said as significant evidence that I cannot be trusted. I am not big on networking, so this is based on observations (from public internet messages and from face-to-face conversations) of only a handful of people, so it could be wrong.
And in turn, for many years I assumed (without any good argument to back it up) that signing up for cryonics (unless it is done for instrumental reasons, like Eliezer did it) was significant evidence that the person should not be trusted. Yes: I took the fact that someone shared my indifference to keeping on living for its own sake, rather than for the sake of keeping on being useful to the world, as significant evidence that I could trust that person. I have recently abandoned that opinion (in favor of vague agnosticism) because most (all) of the extremely strong rationalists I know who had taken the trouble to inform themselves about cryonics and the intelligence explosion had the opposite opinion (and because being confronted with that fact caused me to notice that I had no real basis for my opinion). But note that people with the opposite opinion probably will not convert to agnosticism like me, because among extremely strong rationalists with the necessary information to form an opinion, they are in the majority (even though of course they are in a small minority of the general population). So, maybe I am being too cautious, but I would tend to advise Toby and byrnema, if they are planning on applying for a visiting fellowship at SIAI, FHI or such and want to be very cautious, to refrain from continuing to post about their values as they apply to cryonics.
Regarding being useful, this is something I strongly identify with. I am not a highly self-interest valuing person, though I see self-interest as itself being a useful value (in the right context). I find that I am more motivated to sign up for cryonics when I look at it as an example to set for others than when I look at it as a way to save my own skin. I am essentially more motivated to support than (directly) to adopt cryonics.
Presumably, a well written CEV optimizer would see our desire to be useful and put us in situations where we can be useful. However, I think it’s worth noting that there is quite a bit of wriggle room between the invention of fooming AGI and the development of reversal mechanisms for cryonic suspension. Reversal for cryonic suspension could end up being something that is specifically and painstakingly developed over the course of several decades by humans with normal levels of intelligence.
So far as trustworthiness is concerned, the iterated prisoner’s dilemma suggests that an expectation of ongoing transactions leads to trustworthiness. So, signing up for cryonics implies being slightly more trustworthy than not.
The prisoner’s dilemma is only one part of the human landscape, and no one has argued that it will prove the most decisive part. Well, Robert Wright comes close by having chosen to write an entire book on the iterated prisoner’s dilemma, but that book’s analysis of the causes of increasing human wealth completely neglects the wealth-increasing potential of an explosion of engineered intelligence.
I can make the counterargument that the more resources a person needs to fulfill his desires, the more likely the person is to impose harm on a fellow human being to acquire those resources, and that, for example, I have little to fear from the proverbial Buddhist mystic who is perfectly happy to sit under a tree day after day while his body decays right in front of him. My simple counterargument implies that, everything else being equal, I have more to fear from the one who desires cryonic suspension and revival than from the one who does not.
But the more important point is that human trustworthiness is a very tricky subject. For example, your simple argument fails to explain why old people and others who know they are near the end of their lives are less likely to steal, defraud or cheat than young healthy people are.
I do not claim to know which of the groups under discussion is more trustworthy. (My only reason for advancing my counterargument was to reduce confidence in the insightfulness of your argument.) I am just passing along my tentative belief, obtained from discussions with and reading of other singularitarians and cryonicists and from observing the evolution of my own beliefs about human trustworthiness, that people tend to think they know: they think it is the same group they belong to.
You appear to have completely abandoned your original reason
I was answering the question that Martin asked. I stand by my old reason for not signing up.
Actually, it’s not so much that I have a reason for not signing up as that I have no reason for signing up. So in my original post, I addressed what seemed to be the obvious reason for signing up: that one would hold long life to be of value in itself, which I don’t. Then Martin suggested another reason (that on any given day, I would want to live another day), so I addressed that one.
If you were signing up for a health-insurance program which included coverage for cryonics by default, along with other available treatments for severe injuries, would you opt out of that part of the coverage, and ask to be embalmed or cremated rather than frozen? What if it cost extra to do so?
Probably not. I wouldn’t seek out such a plan, and the way things are now, such a plan would cost far more than other plans, so I wouldn’t buy it. But things may be different in the future.
What I mean is, if the plan which otherwise provided all the benefits you wanted for the least cost also included cryonics (as some sort of silly package deal, due to market forces otherwise beyond your understanding) how much would it be worth to you to have the opportunity to randomly get hit by a bus someday and not wake up at all?
Not much, and possibly a negative amount (meaning that I’d prefer the cryonics coverage); I’ll have to think about it when the time comes.
Really, a lot depends on whether my relatives and friends have also signed up for cryonics. If the situation you describe ever exists, it will probably only be when cryonics has become normal, in which case it’s much more likely that I will want it for myself, thanks to having friends waiting for me in the future.
Heck, getting involved in Less Wrong meet-ups might be enough! I find that hard to predict (and unlikely to be tested soon, given where I live and how full my social life is now).
Originally you said:
I no longer desired to live forever. I didn’t even desire to live longer than about a century.
And then when Martin asked if you foresee a day in the future when you would prefer to die rather than live another day, you said:
I can also easily imagine that I will never want to die. I can easily imagine that, as health care improves ahead of my aging, many of the people who are alive now will live forever, and I will also. That would be fine.
Which suggests that you either do now desire to live forever or are at least comfortable with the idea of doing so. It looked to me like you changed your mind on the question of whether you would actually want to live forever after all, but maybe this was a misinterpretation of your position.
You almost seem to be viewing the question of whether you value a long life as fundamentally different to the question of whether you would want to continue living on any given future day. This seems bizarre to me.
I no longer desired to live forever. I didn’t even desire to live longer than about a century.
That’s just part of my history. I carefully put it in the past tense.
Then I wrote a paragraph saying that I no longer had any particular opinion as to how long I should live, that I would just see it day by day. Actually, the paragraph covered more than that, including how I transitioned from a feeling that a century was about right to the idea that it was silly to judge such things. But then, on proofreading my original post, I cut that paragraph. So now my original post reads
[…] I didn’t even desire to live longer than about a century.
And since I no longer desire to live so long, […]
The transition from past tense to present tense is not very clear there, for which I apologise.
But currently I have no particular desire about my length of life. I could make a prediction, based on what is likely to happen in the future and what I am likely to want, as to whether I will always want to live a bit longer, and if I predict that I will, then I could say now that I want to live forever. But signing up for cryonics now would not help me achieve any of the wishes that I anticipate having in the future, because that’s not how I’ll want to live longer. (And if this prediction is wrong, then I can sign up later.)
You almost seem to be viewing the question of whether you value a long life as fundamentally different to the question of whether you would want to continue living on any given future day. This seems bizarre to me.
In that case, taboo wanting to live forever. For some people, that seems to be a value for its own sake; I think that it was for me once. But now I’m rational like you, and I only want to live forever if I’ll forever want to live. So the only question is whether I want, assuming that I get hit by a bus today, to wake up a hundred years later. And I don’t particularly.
But once upon a time, I really wanted to live forever, because I liked the idea of living forever. In holding this idea, I wasn’t thinking about whether some day I would like to die; it was, if not a terminal value in its own right, something close to that. Furthermore, death was scary and unknown, and I was taught about Heaven and Hell; even after I realised that this was a fairy tale, I harboured an idea that death was bad in and of itself. There are probably good evolutionary reasons why somebody would feel this way.
Once I was cured of all that, however, anything that might have made cryonics inviting was gone. That was the point of my original post.
The only question is whether I want, assuming that I get hit by a bus today, to wake up a hundred years later. And I don’t.
This is a really pithy and compelling way of putting this. I definitely have, at a gut level, a desire to wake up tomorrow. But I don’t even have at that same gut level a desire to come out of a coma 20 years from now. Cryonics presses my survival instinct even more gently.
(Edit: I see that Bartels made the coma analogy a few comments up. Excuse the redundancy, or take it for emphasis.)