Until very recently, I tended to think of cryonics as something nutty and tacitly assumed that cryonics organisations were a little shady. These weren’t strong beliefs, and I knew that I had no real basis for them, so I would never have tried to argue them to others, but they were my impressions. I blame the anti-cult and anti-scam heuristics identified in the comments by Jonathan Graehl and Pavitra.
Now that I’ve come here and seen all of you rational people into cryonics, I’ve looked at the references here and realised that my impressions were wrong. So cryonics is not terribly expensive and might well work; how interesting! And yet, I have no desire to sign up myself.
Why not? I believe that the reason is that, to spout a cliché, I’ve come to terms with death. There was a time when I found it very attractive to believe religious ideas promising immortality, but once I abandoned those as irrational, I faced the realisation that I was going to die permanently some day. That worried me for a while, but then I got used to it; I no longer desired to live forever. I didn’t even desire to live longer than about a century.
And since I no longer desire to live so long, I have no desire to sign up for cryonics. If I hadn’t been so ignorant about cryonics when I abandoned my religious hopes for immortality, then I might well have held onto that desire. So arguably, the only reason that I don’t want to live into the 4th Millennium is that I was wrong about something in the past. Nevertheless, it’s still true that I don’t particularly want to live into the 4th Millennium. So I’m glad that cryonics is reasonable, and I’m glad that the people on this site who want it are signing up for it, but it’s not something that interests me.
This must be an example of a much broader theme. One wants X but comes to the belief that X is impossible. Then one stops wanting X, which is probably a healthy response when X really is impossible. When it turns out that X is possible after all, one still does not want X.
Anyway, somebody who has gone through this process might see cyronics as threatening because it seems to attack their own rationality. It doesn’t bother me, because I know that ultimate values don’t have to be justified; I don’t want to live forever, and you do, and that’s fine on both ends. But for someone who wants to believe that their ultimate values are objectively correct, and perhaps also for someone who still wants deep down to live forever but has been suppressing this, learning that something is possible after all can be threatening.
This might be an inappropriate question, and it may just be a rephrasing, but:
Do you foresee, that there will be a day in your future, when you will prefer to die on that day over living to see the next one?
I try to grab the FAR notion of ‘I do not want to live forever == I want to die at some point’ and make it NEAR: ‘yeah, that was a fun run; now today is the day it all shall end for me’.
The desire to die often seems to correlate with the weaknesses and diseases of old age, which is a different issue from cryonics. Age-related things can be prevented to some degree, and will hopefully be explored much more in the near future.
Now the bag of arguments against cryo and for dying can generally also be used to argue for suicide in old age (as seen in a Star Trek: TNG episode), or against medical treatment of those who do not wish it. But I rarely see that side argued. And to me it looks very similar.
Cryo as the very slow ambulance ride until a hospital that can treat you has been built.
Do you foresee, that there will be a day in your future, when you will prefer to die on that day over living to see the next one?
Yes, I think that this is quite possible. However, the reasons are, as you say below ‘the weaknesses and diseases of old age’, so they’re not really relevant.
I can also easily imagine that I will never want to die. I can easily imagine that, as health care improves ahead of my aging, many of the people who are alive now will live forever, and I will also. That would be fine.
But cryonics is different. Here, you are asking me to take a break of time during which technology advances far beyond what it is today, not to live into the future one day at a time. That does not interest me.
I’m not even interested in being revived from a coma after several years, using only contemporary technology. Certainly I don’t consider it worth the expense. In fact, the main reason that I don’t sign up for DNR now is that I know some people who would suffer if I did not at least outlive them (plus the bother of signing up, although at least it costs nothing).
But I think that your question may be a good one to ask other people who have come to terms with death and thereby find cryonics unappealing. Ask when, after a short or long period of apparent death, they would not want to be revived. For me, that time comes when the people that I care about are no longer around and the things that interest me are no longer current. But I can imagine that some other people would realise that the answer is never and decide to sign up.
You appear to have completely abandoned your original reason for not signing up for cryonics (that you’ve come to terms with death) in light of MartinB’s question and switched to a new reason (that you would only like to live indefinitely if your life is not interrupted by an intermission of unknown duration) without explicitly acknowledging that you have done so. This makes me somewhat suspicious of your reasoning on this issue.
For what it’s worth, I’m currently unconvinced by the arguments for signing up for cryonics but your reasoning here looks dubious to me even though I share your conclusion.
You appear to have completely abandoned your original reason for not signing up for cryonics (that you’ve come to terms with death) in light of MartinB’s question and switched to a new reason (that you would only like to live indefinitely if your life is not interrupted by an intermission of unknown duration) without explicitly acknowledging that you have done so.
I don’t see these as different reasons, but two components of a point of view that hasn’t been fully articulated. However, I share it. To accept death: once it’s over, it’s over. The things that you valued about life cannot be recovered by resuscitation 200 years later. Nevertheless, life is good. One more day, one more year, one more decade like today would be great. (If anyone can articulate this more fully, please do!)
This seems somewhat analogous to me to a situation where war or natural disaster destroys your home and kills most of your friends and family, but you escape and have the opportunity to start afresh in a new and unfamiliar culture. Now that obviously sounds like a pretty unpleasant situation, and vastly less preferable than if the war or disaster had never occurred, but I would still prefer to survive rather than die. I can imagine that some people might feel differently when presented with that choice, however.
byrnema agreeing with TobyBartels in wanting always to live another day, but being indifferent to his or her own cryonic suspension and revival:
To accept death: once it’s over, it’s over. The things that you valued about life cannot be recovered by resuscitation 200 years later. Nevertheless, life is good. One more day, one more year, one more decade like today would be great. (If anyone can articulate this more fully, please do!)
I, too, want always to live another day (unless my health gets so bad I no longer pay attention to anything or anyone except myself and my pain) but am indifferent to my being cryonically revived.
The way I place these two aspects of my desire into a coherent view is to note that I am useful to the world now (and tomorrow, and next year). In fact, on any given day, the way for me to maximize my usefulness to the world will almost certainly be for me to try to keep on living and to stay as healthy as possible because the source of almost all “wealth” or “usefulness” is human creativity (that is, human intelligence combined with a sincere desire to be useful to the world) and a human’s creativity ends when his or her life ends.
Now if I were to be cryonically suspended and then revived, it is almost certain that an intelligence explosion—more precisely an explosion of engineered intelligences which will be much more useful (for whatever purpose to which they are put) than any human intelligence ever was—has taken place, because that is the only thing anyone can think of that would make possible my revival. But I would not be able to help engineered intelligences improve the world: with my relatively puny intelligence, I would just get in their way. Oh, sure, the machines could radically improve my intelligence, making me a “transhuman”, but from where I stand now, this strategy of continuing my usefulness into the post-human era by becoming a transhuman has very low expected utility relative to my simply leaving the post-human era up to the machines (and transhumans other than me), and a much better application of my resources is for me to try to increase the probability of a good intelligence explosion.
In other words, I see the fact that no one has yet figured out how to transform ordinary matter like you would find in a chair or a silicon wafer into an engineered intelligence superior to human intelligence to be “good news” in the sense that it gives me an opportunity to be useful. (And I am relatively indifferent to the fact that I am suffering a lot more than I would be if someone had already figured out how to make engineered intelligences.) For the same reason, if offered the chance to be teleported back in time 2000 years, I would take it (even if I could not take any of the wealth, tech or knowledge of the past 2000 years with me), because that would increase the expected usefulness of my intelligence and of my simple human life.
So that is why I am indifferent to my own cryonic suspension and revival: even though I will probably be just as intelligent and just as interested in improving the world after my revival as I am now, it will not matter the way it matters now, because my relative ability to improve the world will be less.
In other words, the way I form my wants into a coherent “explanation” or “system” is to say that I am interested in living as long as practical before the intelligence explosion, but approximately indifferent to continuing my life after it. And from that indifference flows my indifference to my being cryonically suspended and revived.
Another thing: I have found that I am able to take coherent actions over extended periods of time to keep on living, but I have not been able to make any non-negligible effort towards my being successfully suspended and revived. In other words, there is a sense in which the values that I have are an empirical matter not under my control. If mattnewport or someone else is able to make a reply to this comment which points out an inconsistency in the “explanation” or “system” I have given above, well, I might be chagrined (because most people do not like to admit in public to holding inconsistent goals or values), but I will probably continue to be unable to motivate myself to take effortful actions over a long period of time to maximize the probability of my successful suspension and revival.
In other words, if as a result of a debate here on Less Wrong (and out of a desire to appear consistent and stable in my goals and values, out of a desire to be accepted by the cool and wonderful people of SIAI or out of some other desire) I were to announce that I have “changed my mind” and I now believe that my being suspended and revived is the right thing for me to choose, I do not see why anyone would care all that much. My desire for my actions and choices to be consistent with my professed values will in that case probably cause me to sign up for cryonics if the cost of signing up is low, but to be honest with you, it is extremely unlikely that anything would cause me to work hard over a long period of time to ensure my suspension and revival because I just do not care and I doubt severely that I could make myself care even if I wanted to make myself care. (There are many goals and values I have unsuccessfully tried to make myself care about.)
One more thing: a lot of people (mostly those who are “motivated” and consequently able to take effortful actions over a long period of time about cryonics) will probably take what I just said as significant evidence that I cannot be trusted. I am not big on networking, so this is based on observations (from public internet messages and from face-to-face conversations) of only a handful of people, so it could be wrong.
And in turn, for many years I assumed (without any good argument to back it up) that signing up for cryonics (unless it is done for instrumental reasons, like Eliezer did it) was significant evidence that the person should not be trusted. Yes: I took the fact that someone shared my indifference to keeping on living for its own sake rather than for the sake of keeping on being useful to the world as significant evidence that I could trust that person. I have recently abandoned that opinion (in favor of vague agnosticism) because most (all) of the extremely strong rationalists I know who had taken the trouble to inform themselves about cryonics and the intelligence explosion had the opposite opinion (and because being confronted with that fact caused me to notice that I had no real basis for my opinion). But note that people with the opposite opinion probably will not convert to agnosticism like me, because among extremely strong rationalists with the necessary information to form an opinion, they are in the majority (even though of course they are in a small minority of the general population). So, maybe I am being too cautious, but I would tend to advise Toby and byrnema, if they are planning on applying for a visiting fellowship at SIAI, FHI or such and want to be very cautious, to refrain from continuing to post about their values as they apply to cryonics.
Regarding being useful, this is something I strongly identify with. I am not a highly self-interest valuing person, though I see self-interest as itself being a useful value (in the right context). I find that I am more motivated to sign up for cryonics when I look at it as an example to set for others than when I look at it as a way to save my own skin. I am essentially more motivated to support than (directly) to adopt cryonics.
Presumably, a well written CEV optimizer would see our desire to be useful and put us in situations where we can be useful. However, I think it’s worth noting that there is quite a bit of wriggle room between the invention of fooming AGI and the development of reversal mechanisms for cryonic suspension. Reversal for cryonic suspension could end up being something that is specifically and painstakingly developed over the course of several decades by humans with normal levels of intelligence.
So far as trustworthiness is concerned, the iterated prisoner’s dilemma suggests that an expectation of ongoing transactions leads to trustworthiness. So, signing up for cryonics implies being slightly more trustworthy than not.
The prisoner’s dilemma is only one part of the human landscape, and no one has argued that it will prove the most decisive part. Well, Robert Wright comes close by having chosen to write an entire book on the iterated prisoner’s dilemma, but that book’s analysis of the causes of increasing human wealth completely neglects the wealth-increasing potential of an explosion of engineered intelligence.
I can make the counterargument that the more resources a person needs to fulfill his desires, the more likely the person is to impose harm on a fellow human being to acquire those resources and that for example I have little to fear from the proverbial Buddhist mystic who is perfectly happy to sit under a tree day after day while his body decays right in front of him. My simple counterargument implies that everything else being equal, I have more to fear from the one who desires cryonic suspension and revival than the one who does not.
But the more important point is that human trustworthiness is a very tricky subject. For example, your simple argument fails to explain why old people and others who know they are near the end of their lives are less likely to steal, defraud or cheat than young healthy people are.
I do not claim to know which of the groups under discussion is more trustworthy. (My only reason for advancing my counterargument was to reduce confidence in the insightfulness of your argument.) I am just passing along my tentative belief, obtained from discussions with and reading of other singularitarians and cryonicists and from observation of the evolution of my own beliefs about human trustworthiness, that people tend to think they know: they think it is the same group they belong to.
You appear to have completely abandoned your original reason
I was answering the question that Martin asked. I stand by my old reason for not signing up.
Actually, it’s not so much that I have a reason for not signing up, as that I have no reason for signing up. So in my original post, I addressed what seemed to be the obvious reason for signing up: that one would hold long life of value in itself, which I don’t. Then Martin suggested another reason (that on any given day, I would want to live another day), so I addressed that one.
If you were signing up for a health-insurance program which included coverage for cryonics by default, along with other available treatments for severe injuries, would you opt out of that part of the coverage, and ask to be embalmed or cremated rather than frozen? What if it cost extra to do so?
Probably not. I wouldn’t seek out such a plan, and the way things are now, such a plan would cost far more than other plans, so I wouldn’t buy it. But things may be different in the future.
What I mean is, if the plan which otherwise provided all the benefits you wanted for the least cost also included cryonics (as some sort of silly package deal, due to market forces otherwise beyond your understanding) how much would it be worth to you to have the opportunity to randomly get hit by a bus someday and not wake up at all?
Not much, and possibly a negative amount (meaning that I’d prefer the cryonics coverage); I’ll have to think about it when the time comes.
Really, a lot depends on whether my relatives and friends have also signed up for cryonics. If the situation you describe ever exists, it will probably only be when cryonics has become normal, in which case it’s much more likely that I will want it for myself, thanks to having friends waiting for me in the future.
Heck, getting involved in Less Wrong meet-ups might be enough! I find that hard to predict (and unlikely to be tested soon, given where I live and how full my social life is now).
I no longer desired to live forever. I didn’t even desire to live longer than about a century.
And then when Martin asked if you foresee a day in the future where you would prefer to die than live another day you said:
I can also easily imagine that I will never want to die. I can easily imagine that, as health care improves ahead of my aging, many of the people who are alive now will live forever, and I will also. That would be fine.
Which suggests that you either do now desire to live forever or are at least comfortable with the idea of doing so. It looked to me like you changed your mind on the question of whether you would actually want to live forever after all but maybe this was a misinterpretation of your position.
You almost seem to be viewing the question of whether you value a long life as fundamentally different to the question of whether you would want to continue living on any given future day. This seems bizarre to me.
I no longer desired to live forever. I didn’t even desire to live longer than about a century.
That’s just part of my history. I carefully put it in the past tense.
Then I wrote a paragraph saying that I no longer had any particular opinion as to how long I should live, that I would just see it day by day. Actually, the paragraph covered more than that, including how I transitioned from a feeling that a century was about right to the idea that it was silly to judge such things. But then, on proofreading my original post, I cut that paragraph. So now my original post reads
[…] I didn’t even desire to live longer than about a century.
And since I no longer desire to live so long, […]
The transition from past tense to present tense is not very clear there, for which I apologise.
But currently I have no particular desire about my length of life. I could make a prediction, based on what is likely to happen in the future and what I am likely to want, as to whether I will always want to live a bit longer, and if I predict that I will, then I could say now that I want to live forever. But signing up for cryonics now would not help me achieve any of the wishes that I anticipate having in the future, because that’s not how I’ll want to live longer. (And if this prediction is wrong, then I can sign up later.)
You almost seem to be viewing the question of whether you value a long life as fundamentally different to the question of whether you would want to continue living on any given future day. This seems bizarre to me.
In that case, taboo wanting to live forever. For some people, that seems to be a value for its own sake; I think that it was for me once. But now I’m rational like you, and I only want to live forever if I’ll forever want to live. So the only question is whether I want, assuming that I get hit by a bus today, to wake up a hundred years later. And I don’t particularly.
But once upon a time, I really wanted to live forever, because I liked the idea of living forever. In holding this idea, I wasn’t thinking about whether some day I would like to die; it was, if not a terminal value in its own right, something close to that. Furthermore, death was scary and unknown, and I was taught about Heaven and Hell; even after I realised that this was a fairy tale, I harboured an idea that death was bad in and of itself. There are probably good evolutionary reasons why somebody would feel this way.
Once I was cured of all that, however, anything that might have made cryonics inviting was gone. That was the point of my original post.
The only question is whether I want, assuming that I get hit by a bus today, to wake up a hundred years later. And I don’t.
This is a really pithy and compelling way of putting this. I definitely have, at a gut level, a desire to wake up tomorrow. But I don’t even have at that same gut level a desire to come out of a coma 20 years from now. Cryonics presses my survival instinct even more gently.
(Edit: I see that Bartels made the coma analogy a few comments up. Excuse the redundancy, or take it for emphasis.)
I find it strange how society at large frowns upon cryo, while also not making a serious effort to prolong the healthy lifespan (wallbangerific), but on the other hand frowns upon suicide.
I also usually avoid the topic due to its iffiness, and I am not signed up myself yet, so it’s basically armchairing anyway.
I think Matt has a point. And of course, if you go searching for your real reasons, all kinds of bad things might happen to you.
But what jumped out at me was that a long lifespan is fine, while a long lifespan with a coma/pause in the middle is not. I don’t get that.
Of course cryo people would love to take their loved ones with them, and are horrified when they ignore the chance.
while also not making a serious effort to prolong the healthy lifespan (wallbangerific)
I agree with that! I’m interested in the work by Aubrey de Grey. It’s not useful to me now, but I predict that someday it will be.
But what jumped out at me was that a long lifespan is fine, while a long lifespan with a coma/pause in the middle is not. I don’t get that.
Well, I don’t suppose that there are many people who feel that way. If you can get across the idea that cryonics is a way of turning one’s death into a very long coma, then that may help make it more attractive.
But I get up in the morning because there are things that I left unfinished the day before. By the time that I am revived from cryonics, they will all be finished.
Of course cryo people would love to take their loved ones with them, and are horrified when they ignore the chance.
If my loved ones signed up for cryonics, that would be reason enough for me.
Well, I don’t suppose that there are many people who feel that way. If you can get across the idea that cryonics is a way of turning one’s death into a very long coma, then that may help make it more attractive.
Yes. Exploring how people would feel about a very long coma could be a good way of exploring how they feel about cryonics-minus-the-creep-factor. In other words, if they didn’t have the psychological obstacles centered around cryonics, how would they really feel about it?
It is a horrendous case of a sub-optimal equilibrium in a coordination game. You know, one of the examples of game theory that isn’t the @#%@ Prisoner’s Dilemma.
I’ve been spending so much of my social time among people who treat swearing with complete nonchalance that I had forgotten how much power even a censored swearword can have in a setting where it is never used.
Yes, I’m wrong about this being the Prisoner’s Dilemma. One side defecting (dying) against the other cooperating (cryopreserving) won’t make the first side better off and the second one worse off.
So it’s just insufficient communication/coordination.
We could also consider a game that was perhaps a step closer to the original. Leave cooperation and defection (and their PD-laden connotations) behind and just consider a pair who would sincerely rather die than live on without the other. This is plausible even without trawling Shakespeare for fictional examples. Here on Less Wrong I have seen the prospect of living on via cryopreservation without a friend joining them described as ‘extrovert hell’. Then, as you say, a ‘cremation’ equilibrium would be the result of insufficient communication/coordination. A high time and/or emotional cost to transitioning preferences would contribute to that kind of undesirable outcome.
Incidentally, if we were to consider pair-cryo selection as an actual PD the most obvious scenario seems to be one in which both parties are overwhelmingly spiteful. Cryopreservation is the defect option. Where life is preferred but it is far more important to ensure that the other dies.
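The distinction drawn above between a coordination game and a true Prisoner’s Dilemma can be made concrete with a small sketch. The payoff numbers below are hypothetical, chosen only to illustrate the structure being discussed: in the coordination framing, both ‘both sign up’ and ‘both die’ are equilibria, so the bad outcome can persist purely from failed coordination, whereas in a genuine PD one action dominates regardless of the partner’s choice.

```python
# Sketch (hypothetical payoffs) contrasting the two games discussed above:
# a coordination game between partners deciding on cryonics, versus a
# standard Prisoner's Dilemma, where defection strictly dominates.

from itertools import product

def pure_nash_equilibria(payoffs):
    """Return the pure-strategy Nash equilibria of a 2-player game.

    payoffs maps (row_action, col_action) -> (row_payoff, col_payoff).
    """
    actions = sorted({a for a, _ in payoffs})
    equilibria = []
    for r, c in product(actions, actions):
        # (r, c) is an equilibrium iff neither player gains by deviating alone.
        row_best = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in actions)
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in actions)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

# Coordination game: each partner most wants to match the other's choice;
# being preserved alone (or dying while the other is preserved) is worst.
coordination = {
    ("cryo", "cryo"): (3, 3), ("cryo", "die"): (0, 1),
    ("die", "cryo"): (1, 0),  ("die", "die"): (2, 2),
}

# Prisoner's Dilemma payoffs for comparison: "defect" strictly dominates.
pd = {
    ("cooperate", "cooperate"): (2, 2), ("cooperate", "defect"): (0, 3),
    ("defect", "cooperate"): (3, 0),    ("defect", "defect"): (1, 1),
}

print(pure_nash_equilibria(coordination))  # [('cryo', 'cryo'), ('die', 'die')]
print(pure_nash_equilibria(pd))            # [('defect', 'defect')]
```

The coordination game has two equilibria, one strictly worse than the other, which is exactly the ‘sub-optimal equilibrium’ complained about earlier; the PD has only mutual defection.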
If my loved ones signed up for cryonics, that would be reason enough for me.
What a horrendous case of prisoner’s dilemma...
Not really. If any of my loved ones were at all interested in cryonics, then we could discuss it and choose to sign up together. In the Prisoner’s Dilemma, you don’t know what your counterpart is doing.
I also usually avoid the topic due to its iffiness, and I am not signed up myself yet, so it’s basically armchairing anyway.
I am starting to wonder if there needs to be more of a recognized social niche for cryo supporters who aren’t signed up themselves (or whose arrangements simply are not public).
My niche is young people with little money living in Europe.
To sign up I need to a) make money—which will happen soon, b) figure out the necessary arrangements for Germany regarding transport, legal matters and whatnot, and c) get the paperwork done.
After writing the earlier comment I got another reality-shock about how stupid it is that society at large doesn’t jump on the longevity issue. Way worse than smoking....
I’m not even interested in being revived from a coma after several years, using only contemporary technology.
Me neither. I would like to write a clause that I am awoken only if a living relative feels like they need me. This may seem like a cheat, because it’s very unlikely that a child or a grandchild won’t want to revive me, but the truth is, I would be content to leave it in their hands. There is no value to my life beyond my immediate network of connections. If I am awoken in 200 years to a world that doesn’t know me, I might as well be someone else, and I don’t mind being someone else. There’s no difference between my experience of ‘I’ and the one that will develop in some number of years in a newly born baby.
Indeed. It always amazes me how successful the meme of self-sacrifice has become at persuading otherwise intelligent people into embracing even the most extreme forms of self-abnegation.
For my part, I’ll stick with enlightened self-interest as the foundation of my values and self-worth. It isn’t perfect, but at least it isn’t going to lead me into elaborate forms of suicide.
It always amazes me how successful the meme of self-sacrifice has become at persuading otherwise intelligent people into embracing even the most extreme forms of self-abnegation.
It sometimes amazes me (but only when I forget about evolutionary psychology, which easily explains it) how successful the meme of self-interest has become at persuading otherwise intelligent people that their life has more value than another’s. (Say another intelligent person’s, to head off one common rationalisation.)
Edit: This paragraph seems to have been confusing. It is somewhat facetious. To be sincere, it should say ‘[…] persuading otherwise intelligent people that it is unintelligent not to value one’s own life more than another’s.’.
at least it isn’t going to lead me into elaborate forms of suicide
I see no elaborate forms of suicide proposed here. But of course I would sacrifice my life for another’s, in some situations. (Or at least I think that I would; my evolutionary heritage may have more to say about that when the time comes.) Already I have had occasion to sacrifice my safety for another’s, but so far I’m still alive.
Actually, I’m not really an altruist. But I don’t pretend that my selfishness has a rational justification.
It sometimes amazes me (but only when I forget about evolutionary psychology, which easily explains it) how successful the meme of self-interest has become at persuading otherwise intelligent people that their life has more value than another’s. (Say another intelligent person’s, to head off one common rationalisation.)
It sometimes amazes me how often commenters on LessWrong (who really should know better if they’ve read the sequences) commit the mind projection fallacy, e.g. by assuming that “value” is a single-place function (“value(thing)”) instead of a two-place one (“value(thing, to-whom)”).
I meant for the otherwise intelligent person in question, of course. Sorry for the confusion.
I don’t think you understand me. You said:
persuading otherwise intelligent people that their life has more value than another’s
implying that it is wrong to define one person’s life as having more value than another’s. I was pointing out that this is the mind projection fallacy, because things do not have value. They only have value to someone. Thus it is perfectly sane to speak of one’s life as having more value [implied: to one’s self] than another’s.
For me, that time comes when the people that I care about are no longer around and the things that interest me are no longer current.
As long as the internet is around, you will be able to find people with your interests. It doesn’t matter how outdated they are.
Besides, why not sign up for cryonics on the off chance that you will like the future? You can always change your mind. Unless they outlaw suicide, and can effectively stop it, in the future. Which doesn’t seem that unlikely considering we’re assuming they’re willing and able to revive your body just because they can.
Besides, why not sign up for cryonics on the off chance that you will like the future?
Because it costs thousands of dollars (a price which reflects its cost in resources). For me, that’s a large amount of money. I don’t spend it on off chances.
I’m not even interested in being revived from a coma after several years, using only contemporary technology. Certainly I don’t consider it worth the expense [writes TobyBartels].
byrnema agrees in a sibling to this comment, and I agree, too.
ADDED. Sewing-Machine agrees too, though he refers to a 20-year coma rather than a coma of several years.
This must be an example of a much broader theme. One wants X but comes to the belief that X is impossible. Then one stops wanting X, which is probably a healthy response when X really is impossible. When it turns out that X is possible after all, one still does not want X.
You could call it “digesting sour grapes”, perhaps.
You could call it “digesting sour grapes”, perhaps.
I like that.
While Aesop’s sour grapes were a hot-headed passionate thing, this is something that develops slowly. Classical sour grapes are a hypocritical rationalisation, and a fox who suddenly realised how it could get the grapes after all would jump at the chance. But with digested sour grapes, the lack of desire is a permanent part of oneself.
Some of the other replies to my comment seem to be trying to convince me that I do really want the grapes deep down. Aesop’s fox does, and there was probably a time that I did too. But now I don’t.
I don’t see how to do that; whether I am revived from cryonic storage is not a continuous thing.
But you could ask analogous questions. Am I glad that I was born? No, why should I care? I certainly don’t think that it’s a good thing to contribute to the birth of as many people as I can, just so that they can live; I only care about people who already exist, and that goes for me as well as for anybody else.
But I think that this would be a good question to ask a lot of other people who might find cryonics unappealing because they’ve come to terms with death. I think that a lot of people have, or think they have, accepted that they will die but are still glad that they were born. So you might ask them if they would similarly be glad to find themselves cryonically revived. But I would not.
I didn’t write that accurately. I should have said:
I only care about people who actually exist (or will exist)
I even care about people who potentially will exist in proportion to the probability that they will exist, which really should be included in the term ‘actually’.
So for example, I care that the people who become pregnant next year get good prenatal care for the sake of the children that they will bear the year after (as well as for their own sakes).
However, I don’t care whether they actually become pregnant, or (given that they do) that those children actually are born, except as this affects them and other actual people. All in all, I wish that fewer people became pregnant and fewer babies were born, for various reasons having to do with how this affects other people, although my main emphasis is that women should have the freedom to choose whether to become and remain pregnant. (So in this vein, I donate to Planned Parenthood, and once did volunteer work for them, and may do so again. This also helps with the prenatal care.)
Then is it fair to say that, all else being equal, for people who don’t currently exist, you’re indifferent between them having no life and an OK life, and you’re indifferent between them having no life and a great life, but you prefer them having a great life to an OK life?
This must be a standard problem in utilitarian theory, but I don’t know its name.
In case you haven’t read my comment introducing myself, know that my ultimate social value is freedom, a sort of utilitarian calculus where utility is freedom. So to judge whether someone should live, the main question to ask is whether they want to live. (I forgot to say in my reply to MartinB that of course I am against medical treatment of those who do not wish it.)
But those who do not exist do not wish anything. So it doesn’t matter.
If by ‘a great life’ you mean a life of great freedom, then I prefer that to the alternative life. But one can only judge what such a life actually is once the person actually exists and has wants. I support prenatal care only on the basis of a prediction about what people will want later, like wanting to be healthy.
It still doesn’t hang together mathematically, since I should simply take expected utility/freedom. As I also said in my introductory comment, I don’t really believe that any utilitarian calculus captures my values. I can understand decision theory once the utilities are assigned, but I don’t understand how to assign utilities in the first place.
I do say that. I care (in terms of how I actually act) about people I see, people I like, people in my extended networks, and all living people. For example, if someone had a heart attack, I would help them even if rationally, the time I spent could be converted into far more lives through optimal giving.
Sure, but my point is you probably wouldn’t use this example of “caring” as a justification in abstract philosophical debates about, e.g., the ethics of cryonics, because visual-field-dependent morality is absurd enough to make it intuitively reasonable that values you truly care about should hold up to some sort of reflection.
It’s important not to be too loose with the idea of “care in terms of how I actually act”, or you’ll end up saying you care about being near large masses or making hiccup noises. You can plausibly argue that falling and hiccups aren’t behavior in the way that helping someone with a heart attack is, but it’s not like there’s a bright dividing line.
You know the “extended mind” hypothesis that says things like calculators or search engines can in some circumstances be seen as parts of your mind? It seems like the flip side of that is an “abridged mind” hypothesis where some parts of your brain are like alien mind control lasers, except located in your skull.
Sure, but my point is you probably wouldn’t use this example of “caring” as a justification in abstract philosophical debates about, e.g., the ethics of cryonics, because visual-field-dependent morality is absurd enough to make it intuitively reasonable that values you truly care about should hold up to some sort of reflection.
Well, yes. I have a reflectively endorsed belief that being an altruist is good and proper. If I were to endorse selfishness, I would include exceptions for those categories, in increasing order of effect on my decisions.
“value is subjective and I happen to care only about whoever is currently in my field of vision”.
Because that’s not really how humans work. We care more about things right in front of us, but we don’t stop caring about someone just because they’re not in our field of vision, and we don’t necessarily start caring about anyone who is.
If the prospect of an unboundedly long life stretched ahead of you and everyone else, would you be thinking “I wish my lifespan were much shorter—perhaps less than a century”?
No, feeling that a century was about the right length was just a phase that I went through. (Although I put it in past tense, I didn’t really make that clear, sorry.)
In fact, right now, I only want to live a few more years, because that’s how long it will take to do the things that I want to do now. However, I predict that in a few more years, I’ll want to live a few years more, and so on for a while, so I plan ahead in that expectation, but that’s all. (There are also a few people that I want to outlive, for their sakes, but that reason will expire in a few decades, and it would not help them if I were cryonically preserved, since they probably won’t live long enough to see me awakened.)
It’s hard to be sure about a century from now, but I predict that, given that I live for a century, I’ll want (possibly a few years of wanting at a time) to live for another century. So I have a long-term interest in life extension, which gives me the prospect that you described. But that’s not the same as cryonics.
What’s the big difference with cryonics - is it the time you spend frozen? How long would you have to spend frozen before you prefer death? Clearly eight hours would be OK with you, since I assume you sometimes sleep that long—so is there a cutoff period after which you would rather die than be revived?
I’ve seen people post a number of practical guidelines like this (which is new to me). Another example might be the Litany of Tarski. Someone (Eliezer?) suggested that instead of asking “Why should I pick X over Y?” one replaces it by “Should I choose X or Y?”, especially if the decision is made.
Is there some collection of practical guidelines, either as a post or a wiki category? I’ve found these measurably improve my rationality.
Someone (Eliezer?) suggested that instead of asking “Why should I pick X over Y?” one replaces it by “Should I choose X or Y?”, especially if the decision is made.
It may just be me, but it seems like Less Wrong and TV Tropes could productively merge their output, perhaps even into the same site.
They both seek to discuss interesting patterns in human thought and behavior, for the purpose of not only entertaining the reader but also giving them the ability to make more useful predictions and analyses. Though one focuses on fictional worlds while the other focuses on the real world, people at both sites would agree that there is a significant amount of overlap between these two models, to the point where it’s often productive and enlightening to study one through indirect analysis of the other.
Plus, they both have a surfeit of snappy meme names, cross-linking, and internal culture.
I’ve sometimes pondered the bizarrely high level of rationality on TV Tropes, and my guess is that it has something to do with people zooming in, thinking about details, trying to find the obvious consequences and moral implications that no one else sees, thinking in “near mode” about things that would usually be considered in “far mode”, and possibly just being made up of nerds.
There’s no way to calculate ultimate values. It’s not that I don’t want to live forever because I think that this would have a negative effect. It’s just not something that I want.
You could argue that living forever would further other of my values. In some ways, this is probably true. In the case of being restored from cryonics, however, I doubt it. I would have even less influence on that future world than I have on this, and none of the people that I care about are likely to be there.
I am rather equivocal about living forever. Maybe someday cryonics will be cheap and easy, something that everybody does (or something as cheap and easy as things that everybody does, even if people still don’t do it, perhaps for irrational reasons). My best guess is that I would sign up for it in that case. So if somebody else wants to sign me up for cryonics, handle the paperwork and make the payments, then I don’t mind, but I wouldn’t bother for my own sake.
If there is a fact of the matter of what you should do, if there are moral arguments that could change your mind about what you believe you should do, then there is a good chance that your current beliefs about what you should do are wrong in some way. And given that chance, you must decide under uncertainty and do an expected utility calculation, taking into account what value a given action would have under each set of possible values you might hold. You are not allowed to ignore the value uncertainty. Maybe the main hypothesis is that you don’t want to live much longer than normal, but the other hypotheses cry for help; they are not absent, and with the moral strength behind their claims you can’t pretend they are not there. Maybe they lose a decision, but they still contribute to its expected moral value, which is therefore not indifference.
If there is a fact of the matter of what you should do
But of course there isn’t.
What you’ve said is fine theoretically, but it’s meaningless unless I have some ultimate values that drive everything else. (You saw some of them on the introductions thread.) I do not value long life for its own sake, and I do not see how being cryogenically revived will further any of the values that I actually hold. In particular, I don’t see how it will enhance the freedom of other people, and I don’t see how it’s at all relevant to any of my personal goals, which will all be obsolete.
If you have a suggestion as to why it would be a good thing for me to sign up for cryonics, that maybe the future will be in dire need of people with my old-fashioned ideas and historical knowledge, or that you have fallen in love with me over the Internet and will miss me when I’m gone, or anything more reasonable but which I can’t think of myself, then please say it. Otherwise I have no reason to do it.
If there is a fact of the matter of what you should do
But of course there isn’t.
That you shouldn’t care about living for a very long time is exactly a claim about this fact. If there is no fact of the matter of what you should do, you can’t claim that the thing you should do is to not care.
If you have a suggestion as to why it would be a good thing for me to sign up for cryonics
As often happens, I am arguing against a faulty argument, not against its conclusion.
It’s enough that I don’t care about living for a very long time.
And if I say that you do, what is the criterion for telling which statement is the correct one? That criterion is what I referred to as the fact of the matter about what you (should) care about. And if there is a fact, there is possibility of being wrong about it.
Unless by “not caring about X” you by definition mean that there are statements being pronounced like “I don’t care about X”, or certain chemicals being released in your brain, you’d have to settle for not having absolutely privileged knowledge about what you actually care about.
As long as we can agree that whether someone cares about X is an empirically discoverable fact, then there seem to be two currently-possible methods of finding out what that is: introspection and viewing their actions (“revealed preference”).
There is no amount of evidence about facts external to a person that could possibly bear on whether they care about X. You might change whether they care about X by presenting external input, but that’s a rather different thing.
what I referred to as the fact of the matter about what you (should) care about
You mentioned the fact of the matter of what I should do. I would hold the fact of the matter of what I should care about in the same contempt. As for the fact of the matter of what I care about, you don’t know what you’re talking about.
The only reason why I’m replying at all is that this is a site dedicated to cognitive biases, and maybe you will cite an interesting post here about how I might be horribly wrong about what I care about. Of course I could be horribly wrong about my intermediate values, but the calculation is not coming out that way.
I think an issue here may be that your statement of caring or not caring about something doesn’t carry much weight when not only are you personally unfamiliar with X, but so is everyone else who ever lived.
You can truthfully state that you aren’t terribly attracted by what you imagine living a second life in Futurama is like; but your mental picture of it is likely to bear very very very little resemblance to what a potential second life will actually feel like. Since cryonics offers a small chance of giving you an actual future life, you should evaluate on the basis of that and therefore pay little attention to what your hopelessly flawed imagination suggests.
It’s like stating that you care/don’t care for a particular hallucinogen without having ever tried it, or any substance similar to it before, or having ever read reports by people who actually tried it. You don’t have a sufficient basis to make your model of it, about which you state a claim of caring/not caring, at all meaningful.
Cryonics costs time and money. The burden of proof is there. If I’m completely ignorant of what it would be like, then I will not spend any effort to bring it about. Byrnema said it well; let the resources be expended on a baby rather than on me.
I would kind of like to try LSD, because I know something about that, and what I’ve heard is mostly positive, when following certain guidelines. A random unknown hallucinogen would pose a danger of long-term health effects. So let hallucinogen X be one whose safety is guaranteed but whose other effects I’m completely ignorant of. Then I don’t care that I’ve never tried hallucinogen X. I am not going to spend time and money seeking it out.
It’s enough that I don’t care about living for a very long time.
And if I say that you do, what is the criterion for telling which statement is the correct one? That criterion is what I referred to as the fact of the matter about what you (should) care about.
This seems wrong to me. That Toby should X does not imply that Toby does X, so determining what Toby should want does not settle whether Toby in fact wants it.
you can’t claim that the thing you should do is to not care.
Toby does not seem to be making that claim, though perhaps implicitly so. (Much like it could be argued that “X” implies “I believe that X”, it could be argued that “I did X” implies “I should have done X”. But that fails on common usage, where “I did X but I should not have done X” is ordinary.)
Until very recently, I tended to think of cryonics as something nutty and tacitly assumed that cryonics organisations were a little shady. These weren’t strong beliefs, and I knew that I had no real basis for them, so I would never have tried to argue them to others, but they were my impressions. I blame the anti-cult and anti-scam heuristics identified in the comments by Jonathan Graehl and Pavitra.
Now that I’ve come here and seen all of you rational people into cryonics, I’ve looked at the references here and realised that my impressions were wrong. So cryonics is not terribly expensive and might well work; how interesting! And yet, I have no desire to sign up myself.
Why not? I believe that the reason is that, to spout a cliché, I’ve come to terms with death. There was a time when I found it very attractive to believe religious ideas promising immortality, but once I abandoned those as irrational, I faced the realisation that I was going to die permanently some day. That worried me for a while, but then I got used to it; I no longer desired to live forever. I didn’t even desire to live longer than about a century.
And since I no longer desire to live so long, I have no desire to sign up for cryonics. If I hadn’t been so ignorant about cryonics when I abandoned my religious hopes for immortality, then I might well have held onto that desire. So arguably, the only reason that I don’t want to live into the 4th Millennium is that I was wrong about something in the past. Nevertheless, it’s still true that I don’t particularly want to live into the 4th Millennium. So I’m glad that cryonics is reasonable, and I’m glad that the people on this site who want it are signing up for it, but it’s not something that interests me.
This must be an example of a much broader theme. One wants X but comes to the belief that X is impossible. Then one stops wanting X, which is probably a healthy response when X really is impossible. When it turns out that X is possible after all, one still does not want X.
Anyway, somebody who has gone through this process might see cyronics as threatening because it seems to attack their own rationality. It doesn’t bother me, because I know that ultimate values don’t have to be justified; I don’t want to live forever, and you do, and that’s fine on both ends. But for someone who wants to believe that their ultimate values are objectively correct, and perhaps also for someone who still wants deep down to live forever but has been suppressing this, learning that something is possible after all can be threatening.
This might be an inappropriate question, though to me it also seems like just a rephrasing.
Do you foresee that there will be a day in your future when you will prefer to die on that day rather than live to see the next one?
I try to take the FAR notion of ‘I do not want to live forever == I want to die at some point’ and make it NEAR: ‘yeah, that was a fun run; now today is the day it all shall end for me’.
The desire to die often seems to correlate with the weaknesses and diseases of old age, which is a separate issue from cryonics. Age-related decline can be prevented to some degree, and will hopefully be explored much more in the near future.
Now the bag of arguments against cryo and for dying can generally also be used to argue for suicide in old age (as seen in a Star Trek: TNG episode), or against medical treatment of those who do not wish it. But I rarely see that side argued, and to me it looks very similar. Cryo is the very slow ambulance ride until a hospital that can treat you has been built.
Yes, I think that this is quite possible. However, the reasons are, as you say below ‘the weaknesses and diseases of old age’, so they’re not really relevant.
I can also easily imagine that I will never want to die. I can easily imagine that, as health care improves ahead of my aging, many of the people who are alive now will live forever, and I will also. That would be fine.
But cryonics is different. Here, you are asking me to take a break of time during which technology advances far beyond what it is today, not to live into the future one day at a time. That does not interest me.
I’m not even interested in being revived from a coma after several years, using only contemporary technology. Certainly I don’t consider it worth the expense. In fact, the main reason that I don’t sign up for DNR now is that I know some people who would suffer if I did not at least outlive them (plus the bother of signing up, although at least it costs nothing).
But I think that your question may be a good one to ask other people who have come to terms with death and thereby find cryonics unappealing. Ask when, after a short or long period of apparent death, they would not want to be revived. For me, that time comes when the people that I care about are no longer around and the things that interest me are no longer current. But I can imagine that some other people would realise that the answer is never and decide to sign up.
You appear to have completely abandoned your original reason for not signing up for cryonics (that you’ve come to terms with death) in light of MartinB’s question and switched to a new reason (that you would only like to live indefinitely if your life is not interrupted by an intermission of unknown duration) without explicitly acknowledging that you have done so. This makes me somewhat suspicious of your reasoning on this issue.
For what it’s worth, I’m currently unconvinced by the arguments for signing up for cryonics but your reasoning here looks dubious to me even though I share your conclusion.
I don’t see these as different reasons, but as two components of a point of view that hasn’t been fully articulated. However, I share it. To accept death: once it’s over, it’s over. The things that you valued about life cannot be recovered by resuscitation 200 years later. Nevertheless, life is good. One more day, one more year, one more decade like today would be great. (If anyone can articulate this more fully, please do!)
This seems to me somewhat analogous to a situation where war or natural disaster destroys your home and kills most of your friends and family, but you escape and have the opportunity to start afresh in a new and unfamiliar culture. Now that obviously sounds like a pretty unpleasant situation, and vastly less preferable than if the war or disaster had never occurred, but I would still prefer to survive rather than die. I can imagine that some people might feel differently when presented with that choice, however.
byrnema agreeing with TobyBartels in wanting always to live another day, but being indifferent to his or her own cryonic suspension and revival:
I, too, want always to live another day (unless my health gets so bad I no longer pay attention to anything or anyone except myself and my pain) but am indifferent to my being cryonically revived.
The way I place these two aspects of my desire into a coherent view is to note that I am useful to the world now (and tomorrow, and next year). In fact, on any given day, the way for me to maximize my usefulness to the world will almost certainly be for me to try to keep on living and to stay as healthy as possible because the source of almost all “wealth” or “usefulness” is human creativity (that is, human intelligence combined with a sincere desire to be useful to the world) and a human’s creativity ends when his or her life ends.
Now if I were to be cryonically suspended and then revived, it is almost certain that an intelligence explosion—more precisely an explosion of engineered intelligences which will be much more useful (for whatever purpose to which they are put) than any human intelligence ever was—has taken place, because that is the only thing anyone can think of that would make possible my revival. But I would not be able to help engineered intelligences improve the world: with my relatively puny intelligence, I would just get in their way. Oh, sure, the machines could radically improve my intelligence, making me a “transhuman”, but from where I stand now, this strategy of continuing my usefulness into the post-human era by becoming a transhuman has very low expected utility relative to my simply leaving the post-human era up to the machines (and transhumans other than me), and a much better application of my resources is for me to try to increase the probability of a good intelligence explosion.
In other words, I see the fact that no one has yet figured out how to transform ordinary matter like you would find in a chair or a silicon wafer into an engineered intelligence superior to human intelligence to be “good news” in the sense that it gives me an opportunity to be useful. (And I am relatively indifferent to the fact that I am suffering a lot more than I would be if someone had already figured out how to make engineered intelligences.) For the same reason, if offered the chance to be teleported back in time 2000 years, I would take it (even if I could not take any of the wealth, tech or knowledge of the past 2000 years with me), because that would increase the expected usefulness of my intelligence and of my simple human life.
So that is why I am indifferent to my own cryonic suspension and revival: even though I will probably be just as intelligent and just as interested in improving the world after my revival as I am now, it will not matter the way it matters now, because my relative ability to improve the world will be less.
In other words, the way I form my wants into a coherent “explanation” or “system” is to say that I am interested in living as long as practical before the intelligence explosion, but approximately indifferent to continuing my life after it. And from that indifference flows my indifference to my being cryonically suspended and revived.
Another thing: I have found that I am able to take coherent actions over extended periods of time to keep on living, but I have not been able to make any non-negligible effort towards my being successfully suspended and revived. In other words, there is a sense in which the values that I have are an empirical matter not under my control. If mattnewport or someone else is able to make a reply to this comment which points out an inconsistency in the “explanation” or “system” I have given above, well, I might be chagrined (because most people do not like to admit in public to holding inconsistent goals or values), but I will probably continue to be unable to motivate myself to take effortful actions over a long period of time to maximize the probability of my successful suspension and revival.
In other words, if as a result of a debate here on Less Wrong (and out of a desire to appear consistent and stable in my goals and values, out of a desire to be accepted by the cool and wonderful people of SIAI or out of some other desire) I were to announce that I have “changed my mind” and I now believe that my being suspended and revived is the right thing for me to choose, I do not see why anyone would care all that much. My desire for my actions and choices to be consistent with my professed values will in that case probably cause me to sign up for cryonics if the cost of signing up is low, but to be honest with you, it is extremely unlikely that anything would cause me to work hard over a long period of time to ensure my suspension and revival because I just do not care and I doubt severely that I could make myself care even if I wanted to make myself care. (There are many goals and values I have unsuccessfully tried to make myself care about.)
One more thing: a lot of people (mostly those who are “motivated” and consequently able to take effortful actions over a long period of time about cryonics) will probably take what I just said as significant evidence that I cannot be trusted. I am not big on networking, so this is based on observations (from public internet messages and from face-to-face conversations) of only a handful of people, so it could be wrong.
And in turn, for many years I assumed (without any good argument to back it up) that signing up for cryonics (unless it is done for instrumental reasons, as Eliezer did it) was significant evidence that the person should not be trusted. Yes: I took the fact that someone shared my indifference to keeping on living for its own sake, rather than for the sake of keeping on being useful to the world, as significant evidence that I could trust that person. I have recently abandoned that opinion (in favor of vague agnosticism) because most (all) of the extremely strong rationalists I know who had taken the trouble to inform themselves about cryonics and the intelligence explosion held the opposite opinion (and because being confronted with that fact caused me to notice that I had no real basis for my opinion). But note that people with the opposite opinion probably will not convert to agnosticism as I did, because among extremely strong rationalists with the necessary information to form an opinion, they are in the majority (even though of course they are in a small minority of the general population). So, maybe I am being too cautious, but I would tend to advise Toby and byrnema, if they are planning on applying for a visiting fellowship at SIAI, FHI or such and want to be very careful, to refrain from continuing to post about their values as they apply to cryonics.
Regarding being useful, this is something I strongly identify with. I am not a highly self-interest valuing person, though I see self-interest as itself being a useful value (in the right context). I find that I am more motivated to sign up for cryonics when I look at it as an example to set for others than when I look at it as a way to save my own skin. I am essentially more motivated to support than (directly) to adopt cryonics.
Presumably, a well written CEV optimizer would see our desire to be useful and put us in situations where we can be useful. However, I think it’s worth noting that there is quite a bit of wriggle room between the invention of fooming AGI and the development of reversal mechanisms for cryonic suspension. Reversal for cryonic suspension could end up being something that is specifically and painstakingly developed over the course of several decades by humans with normal levels of intelligence.
So far as trustworthiness is concerned, the iterated prisoner’s dilemma suggests that an expectation of ongoing transactions leads to trustworthiness. So, signing up for cryonics implies being slightly more trustworthy than not.
The prisoner’s dilemma is only one part of the human landscape, and no one has argued that it will prove the most decisive part. Well, Robert Wright comes close by having chosen to write an entire book on the iterated prisoner’s dilemma, but that book’s analysis of the causes of increasing human wealth completely neglects the wealth-increasing potential of an explosion of engineered intelligence.
I can make the counterargument that the more resources a person needs to fulfill his desires, the more likely the person is to impose harm on a fellow human being to acquire those resources, and that, for example, I have little to fear from the proverbial Buddhist mystic who is perfectly happy to sit under a tree day after day while his body decays right in front of him. My simple counterargument implies that, everything else being equal, I have more to fear from the one who desires cryonic suspension and revival than from the one who does not.
But the more important point is that human trustworthiness is a very tricky subject. For example, your simple argument fails to explain why old people and others who know they are near the end of their lives are less likely to steal, defraud or cheat than young healthy people are.
I do not claim to know which of the groups under discussion is more trustworthy. (My only reason for advancing my counterargument was to reduce confidence in the insightfulness of your argument.) I am just passing along my tentative belief, obtained from discussions with and reading of other singularitarians and cryonicists, and from observation of the evolution of my own beliefs about human trustworthiness, that people tend to think they know. They think it is the same group they belong to.
I was answering the question that Martin asked. I stand by my old reason for not signing up.
Actually, it’s not so much that I have a reason for not signing up, as that I have no reason for signing up. So in my original post, I addressed what seemed to be the obvious reason for signing up: that one would hold long life of value in itself, which I don’t. Then Martin suggested another reason (that on any given day, I would want to live another day), so I addressed that one.
If you were signing up for a health-insurance program which included coverage for cryonics by default, along with other available treatments for severe injuries, would you opt out of that part of the coverage, and ask to be embalmed or cremated rather than frozen? What if it cost extra to do so?
Probably not. I wouldn’t seek out such a plan, and the way things are now, such a plan would cost far more than other plans, so I wouldn’t buy it. But things may be different in the future.
What I mean is, if the plan which otherwise provided all the benefits you wanted for the least cost also included cryonics (as some sort of silly package deal, due to market forces otherwise beyond your understanding) how much would it be worth to you to have the opportunity to randomly get hit by a bus someday and not wake up at all?
Not much, and possibly a negative amount (meaning that I’d prefer the cryonics coverage); I’ll have to think about it when the time comes.
Really, a lot depends on whether my relatives and friends have also signed up for cryonics. If the situation you describe ever exists, it will probably only be when cryonics has become normal, in which case it’s much more likely that I will want it for myself, thanks to having friends waiting for me in the future.
Heck, getting involved in Less Wrong meet-ups might be enough! I find that hard to predict (and unlikely to be tested soon, given where I live and how full my social life is now).
Originally you said:
And then when Martin asked if you foresee a day in the future where you would prefer to die than live another day you said:
Which suggests that you either do now desire to live forever or are at least comfortable with the idea of doing so. It looked to me like you changed your mind on the question of whether you would actually want to live forever after all but maybe this was a misinterpretation of your position.
You almost seem to be viewing the question of whether you value a long life as fundamentally different to the question of whether you would want to continue living on any given future day. This seems bizarre to me.
That’s just part of my history. I carefully put it in the past tense.
Then I wrote a paragraph saying that I no longer had any particular opinion as to how long I should live, that I would just see it day by day. Actually, the paragraph covered more than that, including how I transitioned from a feeling that a century was about right to the idea that it was silly to judge such things. But then, on proofreading my original post, I cut that paragraph. So now my original post reads
The transition from past tense to present tense is not very clear there, for which I apologise.
But currently I have no particular desire about my length of life. I could make a prediction, based on what is likely to happen in the future and what I am likely to want, as to whether I will always want to live a bit longer, and if I predict that I will, then I could say now that I want to live forever. But signing up for cryonics now would not help me achieve any of the wishes that I anticipate having in the future, because that’s not how I’ll want to live longer. (And if this prediction is wrong, then I can sign up later.)
In that case, taboo wanting to live forever. For some people, that seems to be a value for its own sake; I think that it was for me once. But now I’m rational like you, and I only want to live forever if I’ll forever want to live. So the only question is whether I want, assuming that I get hit by a bus today, to wake up a hundred years later. And I don’t particularly.
But once upon a time, I really wanted to live forever, because I liked the idea of living forever. In holding this idea, I wasn’t thinking about whether some day I would like to die; it was, if not a terminal value in its own right, something close to that. Furthermore, death was scary and unknown, and I was taught about Heaven and Hell; even after I realised that this was a fairy tale, I harboured an idea that death was bad in and of itself. There are probably good evolutionary reasons why somebody would feel this way.
Once I was cured of all that, however, anything that might have made cryonics inviting was gone. That was the point of my original post.
This is a really pithy and compelling way of putting this. I definitely have, at a gut level, a desire to wake up tomorrow. But I don’t even have at that same gut level a desire to come out of a coma 20 years from now. Cryonics presses my survival instinct even more gently.
(Edit: I see that Bartels made the coma analogy a few comments up. Excuse the redundancy, or take it for emphasis.)
Thank you for answering.
I find it strange how society at large frowns upon cryo while also not making a serious effort to prolong the healthy lifespan (wallbangerific), but on the other hand frowns upon suicide.
I also usually avoid the topic due to its iffiness, and I am not signed up myself yet, so it’s basically armchairing anyway.
I think Matt has a point. And of course, if you go into the search for your real reasons, all kinds of bad things might happen for you.
But what jumped out at me was that a long lifespan is fine, while a long lifespan with a coma/pause in the middle is not. I don’t get that.
Of course cryo people would love to take their loved ones with them, and are horrified when they ignore the chance.
I agree with that! I’m interested in the work by Aubrey de Grey. It’s not useful to me now, but I predict that someday it will be.
Well, I don’t suppose that there are many people who feel that way. If you can get across the idea that cryonics is a way of turning one’s death into a very long coma, then that may help make it more attractive.
But I get up in the morning because there are things that I left unfinished the day before. By the time that I am revived from cryonics, they will all be finished.
If my loved ones signed up for cryonics, that would be reason enough for me.
Yes. Exploring how people would feel about a very long coma could be a good way of exploring how they feel about cryonics-minus-the-creep-factor. In other words, if they didn’t have the psychological obstacles centered around cryonics, how would they really feel about it?
What a horrendous case of prisoner’s dilemma...
It is a horrendous case of a sub-optimal equilibrium in a coordination game. You know, one of the examples of game theory that isn’t the @#%@ Prisoner’s Dilemma.
I’ve been spending so much of my social time among people who treat swearing with complete nonchalance that I had forgotten how much power even a censored swearword can have in a setting where it is never used.
Yes, I’m wrong about this being a prisoner’s dilemma. One side defecting (dying) against the other cooperating (cryopreserving) won’t make the first side better off and the second one worse off.
So it’s just insufficient communication/coordination.
We could also consider a game that was perhaps a step closer to the original. Leave cooperation and defection (and their PD-laden connotations) behind and just consider a pair who would sincerely rather die than live on without the other. This is plausible even without trawling Shakespeare for fictional examples. Here on lesswrong I have seen the prospect of living on via cryopreservation without a friend joining them described as ‘extrovert hell’. Then, as you say, a ‘cremation’ equilibrium would be the result of insufficient communication/coordination. A high time and/or emotional cost to transitioning preferences would contribute to that kind of undesirable outcome.
Incidentally, if we were to consider pair-cryo selection as an actual PD the most obvious scenario seems to be one in which both parties are overwhelmingly spiteful. Cryopreservation is the defect option. Where life is preferred but it is far more important to ensure that the other dies.
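The structural difference under discussion can be made concrete. Below is a minimal sketch (the payoff numbers are hypothetical, chosen only to give each game its defining shape): a coordination game has two pure-strategy equilibria, one of which may be sub-optimal for both players, while the prisoner’s dilemma has exactly one.

```python
# Contrast the two games discussed above. Payoff numbers are made up;
# strategy 0 = sign up for cryonics, strategy 1 = don't.

def pure_nash_equilibria(payoffs):
    """Return all pure-strategy Nash equilibria of a 2x2 game.

    payoffs[i][j] = (row player's payoff, column player's payoff)
    when row plays i and column plays j.
    """
    equilibria = []
    for i in range(2):
        for j in range(2):
            # Neither player can gain by unilaterally deviating.
            row_ok = payoffs[i][j][0] >= payoffs[1 - i][j][0]
            col_ok = payoffs[i][j][1] >= payoffs[i][1 - j][1]
            if row_ok and col_ok:
                equilibria.append((i, j))
    return equilibria

# Coordination game: each partner prefers to match the other's choice;
# (both sign up) beats (both don't), but both are stable.
coordination = [[(3, 3), (0, 1)],
                [(1, 0), (2, 2)]]

# Prisoner's dilemma for comparison: strategy 1 dominates regardless.
dilemma = [[(2, 2), (0, 3)],
           [(3, 0), (1, 1)]]

print(pure_nash_equilibria(coordination))  # [(0, 0), (1, 1)]
print(pure_nash_equilibria(dilemma))       # [(1, 1)]
```

The sub-optimal (1, 1) equilibrium of the coordination game is the ‘cremation’ equilibrium described above: stable once expected, even though both players would prefer the other one.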
Not really. If any of my loved ones were at all interested in cryonics, then we could discuss it and choose to sign up together. In the Prisoner’s Dilemma, you don’t know what your counterpart is doing.
I am starting to wonder if there needs to be more of a recognized social niche for cryo supporters who aren’t signed up themselves (or whose arrangements simply are not public).
My niche is young people with little money living in Europe. To sign up I need to a) make money, which will happen soon, b) figure out the necessary arrangements for Germany regarding transport, legal matters and what not, c) get the paperwork done. After writing the earlier comment I got another reality-shock about how stupid it is that society at large doesn’t jump on the longevity issue. Way worse than smoking....
Me neither. I would like to write a clause that I am awoken only if a living relative feels like they need me. This may seem like a cheat, because it’s very unlikely that a child or a grandchild won’t want to revive me, but the truth is, I would be content to leave it in their hands. There is no value to my life beyond my immediate network of connections. If I am awoken in 200 years to a world that doesn’t know me, I might as well be someone else, and I don’t mind being someone else. There’s no difference between my experience of ‘I’ and the one that will develop in some number of years in a newly born baby.
There is no value to my life beyond my immediate network of connections.
That is the saddest statement I have read this whole week.
Indeed. It always amazes me how successful the meme of self-sacrifice has become at persuading otherwise intelligent people to embrace even the most extreme forms of self-abnegation.
For my part, I’ll stick with enlightened self-interest as the foundation of my values and self-worth. It isn’t perfect, but at least it isn’t going to lead me into elaborate forms of suicide.
It sometimes amazes me (but only when I forget about evolutionary psychology, which easily explains it) how successful the meme of self-interest has become at persuading otherwise intelligent people that their life has more value than another’s. (Say another intelligent person’s, to head off one common rationalisation.)
Edit: This paragraph seems to have been confusing. It is somewhat facetious. To be sincere, it should say ‘[…] persuading otherwise intelligent people that it is unintelligent not to value one’s own life more than another’s.’.
I see no elaborate forms of suicide proposed here. But of course I would sacrifice my life for another’s, in some situations. (Or at least I think that I would; my evolutionary heritage may have more to say about that when the time comes.) Already I have had occasion to sacrifice my safety for another’s, but so far I’m still alive.
Actually, I’m not really an altruist. But I don’t pretend that my selfishness has a rational justification.
It sometimes amazes me how often commenters on LessWrong (who really should know better if they’ve read the sequences) commit the mind projection fallacy, e.g. by assuming that “value” is a single-place function (“value(thing)”) instead of a two-place one (“value(thing, to-whom)”).
I meant for the otherwise intelligent person in question, of course. Sorry for the confusion.
By the way, I interpreted ewbrownv’s comment in precisely the same vein.
I don’t think you understand me. You said:
implying that it is wrong to define one person’s life as having more value than another’s. I was pointing out that this is the mind projection fallacy, because things do not have value. They only have value to someone. Thus it is perfectly sane to speak of one’s life as having more value [implied: to one’s self] than another’s.
Yes, of course it is!
And it is equally sane to speak of one’s life as only having value in its relation to others.
My comment was a reply to the comment to which it was a reply; it does not make sense out of context.
Edit: I have edited the comment in question to be more clear.
What if a living relative just misses you and would like to have you around?
As long as the internet is around, you will be able to find people with your interests. It doesn’t matter how outdated they are.
Besides, why not sign up for cryonics on the off chance that you will like the future? You can always change your mind. Unless they outlaw suicide, and can effectively stop it, in the future. Which doesn’t seem that unlikely considering we’re assuming they’re willing and able to revive your body just because they can.
Because it costs thousands of dollars (a price which reflects its cost in resources). For me, that’s a large amount of money. I don’t spend it on off chances.
byrnema agrees in a sibling to this comment, and I agree, too.
ADDED. Sewing-Machine agrees too though he refers to a 20-year coma rather than a coma of several years.
You appear to have completely abandoned your original reason for not signing up for cryonics (that you’ve come to terms with death) in the light of MartinB’s question and switched to a new reason (that you would only like to live indefinitely if your life is not interrupted by an intermission of unknown duration) without explicitly acknowledging that you have done so. This makes me somewhat suspicious of your reasoning on this issue.
For what it’s worth, I’m currently unconvinced by the arguments for signing up for cryonics but your reasoning here looks dubious to me.
You could call it “digesting sour grapes”, perhaps.
Learned helplessness, perhaps? http://en.wikipedia.org/wiki/Learned_helplessness
I like that.
While Aesop’s sour grapes were a hot-headed passionate thing, this is something that develops slowly. Classical sour grapes are a hypocritical rationalisation, and a fox who suddenly realised how it could get the grapes after all would jump at the chance. But with digested sour grapes, the lack of desire is a permanent part of oneself.
Some of the other replies to my comment seem to be trying to convince me that I do really want the grapes deep down. Aesop’s fox does, and there was probably a time that I did too. But now I don’t.
Apply the reversal test.
I don’t see how to do that; whether I am revived from cryonic storage is not a continuous thing.
But you could ask analogous questions. Am I glad that I was born? No, why should I care? I certainly don’t think that it’s a good thing to contribute to the birth of as many people as I can, just so that they can live; I only care about people who already exist, and that goes for me as well as for anybody else.
But I think that this would be a good question to ask to a lot of other people who might find cryonics unappealing because they’ve come to terms with death. I think that a lot of people have, or think they have, accepted that they will die but are still glad that they were born. So you might ask them if they would similarly be glad to find themselves cryonically revived. But I would not.
I’m curious why people say things like:
“value is subjective and I happen to care only about people who already exist”
“value is subjective and I happen to care only about people who live in the same country as me”
“value is subjective and I happen to care only about my friends and family”
but not:
“value is subjective and I happen to care only about whoever is currently in my field of vision”.
I wrote:
I didn’t write that accurately. I should have said:
I even care about people who potentially will exist in proportion to the probability that they will exist, which really should be included in the term ‘actually’.
So for example, I care that the people who become pregnant next year get good prenatal care for the sake of the children that they will bear the year after (as well as for their own sakes).
However, I don’t care whether they actually become pregnant, or (given that they do) that those children actually are born, except as this affects them and other actual people. All in all, I wish that fewer people became pregnant and fewer babies were born, for various reasons having to do with how this affects other people, although my main emphasis is that women should have the freedom to choose whether to become and remain pregnant. (So in this vein, I donate to Planned Parenthood, and once did volunteer work for them, and may do so again. This also helps with the prenatal care.)
Then is it fair to say that, all else being equal, for people who don’t currently exist, you’re indifferent between them having no life and an OK life, and you’re indifferent between them having no life and a great life, but you prefer them having a great life to an OK life?
This must be a standard problem in utilitarian theory, but I don’t know its name.
In case you haven’t read my comment introducing myself, know that my ultimate social value is freedom, a sort of utilitarian calculus where utility is freedom. So to judge whether someone should live, the main question to ask is whether they want to live. (I forgot to say in my reply to MartinB that of course I am against medical treatment of those who do not wish it.)
But those who do not exist do not wish anything. So it doesn’t matter.
If by ‘a great life’ you mean a life of great freedom, then I prefer that to the alternative life. But one can only judge what such a life actually is once the person actually exists and has wants. I support prenatal care only on the basis of a prediction about what people will want later, like wanting to be healthy.
It still doesn’t hang together mathematically, since I should simply take expected utility/freedom. As I also said in my introductory comment, I don’t really believe that any utilitarian calculus captures my values. I can understand decision theory once the utilities are assigned, but I don’t understand how to assign utilities in the first place.
Pretty sure this is just the flip side of the repugnant conclusion http://en.wikipedia.org/wiki/Mere_addition_paradox, which is about whether you should care about average welfare or total welfare.
Thanks, that’s it!
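For readers unfamiliar with the distinction behind the mere addition paradox, a toy example (the welfare numbers are made up purely for illustration) shows how total and average utilitarianism disagree about adding people whose lives are barely worth living:

```python
# Toy illustration of the average-vs-total welfare distinction.
# Welfare numbers are hypothetical.

def total_welfare(population):
    return sum(population)

def average_welfare(population):
    return sum(population) / len(population)

existing = [10, 10, 10]          # three people with good lives
augmented = existing + [3, 3]    # add two lives barely worth living

# A total utilitarian prefers the larger population; an average
# utilitarian rejects the "mere addition".
print(total_welfare(existing), total_welfare(augmented))      # 30 36
print(average_welfare(existing), average_welfare(augmented))  # 10.0 7.2
```

The flip side mentioned above is a view that scores both populations the same unless already-existing people are affected, which neither the total nor the average column captures.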
I do say that. I care (in terms of how I actually act) about people I see, people I like, people in my extended networks, and all living people. For example, if someone had a heart attack, I would help them even if rationally, the time I spent could be converted into far more lives through optimal giving.
Sure, but my point is you probably wouldn’t use this example of “caring” as a justification in abstract philosophical debates about, e.g., the ethics of cryonics, because visual-field-dependent morality is absurd enough to make it intuitively reasonable that values you truly care about should hold up to some sort of reflection.
It’s important not to be too loose with the idea of “care in terms of how I actually act”, or you’ll end up saying you care about being near large masses or making hiccup noises. You can plausibly argue that falling and hiccups aren’t behavior in the way that helping someone with a heart attack is, but it’s not like there’s a bright dividing line.
You know the “extended mind” hypothesis that says things like calculators or search engines can in some circumstances be seen as parts of your mind? It seems like the flip side of that is an “abridged mind” hypothesis where some parts of your brain are like alien mind control lasers, except located in your skull.
Well, yes. I have a reflectively endorsed belief that being an altruist is good and proper. If I were to endorse selfishness, I would include exceptions for those categories, in increasing order of effect on my decisions.
If value is subjective, there’s nothing particularly odd about saying the first things but not the second. That’s just their subjective preference.
Because that’s not really how humans work. We care more about things right in front of us, but we don’t stop caring about someone just because they’re not in our field of vision, and we don’t necessarily start caring about anyone who is.
So imagine that I said “to a substantial extent”.
Sure, but there are things close enough to what I said that are true but that would have been more of a pain to write down.
If the prospect of an unboundedly long life stretched ahead of you and everyone else, would you be thinking “I wish my lifespan were much shorter—perhaps less than a century”?
No, feeling that a century was about the right length was just a phase that I went through. (Although I put it in past tense, I didn’t really make that clear, sorry.)
In fact, right now, I only want to live a few more years, because that’s how long it will take to do the things that I want to do now. However, I predict that in a few more years, I’ll want to live a few years more, and so on for a while, so I plan ahead in that expectation, but that’s all. (There are also a few people that I want to outlive, for their sakes, but that reason will expire in a few decades, and it would not help them if I were cryonically preserved, since they probably won’t live long enough to see me awakened.)
It’s hard to be sure about a century from now, but I predict that, given that I live for a century, I’ll want (possibly a few years of wanting at a time) to live for another century. So I have a long-term interest in life extension, which gives me the prospect that you described. But that’s not the same as cryonics.
What’s the big difference with cryonics - is it the time you spend frozen? How long would you have to spend frozen before you prefer death? Clearly eight hours would be OK with you, since I assume you sometimes sleep that long—so is there a cutoff period after which you would rather die than be revived?
Somewhere between a few years and a few decades, I think.
(IANAL) I think that would be a very simple clause to include in any freezing arrangement.
The extremely small chance that cryonics will work within that time doesn’t justify the expense.
But for those who can afford more, it would be interesting to see short-term cryonics added to a health insurance plan.
I’ve seen people post a number of practical guidelines like this (which is new to me). Another example might be the Litany of Tarski. Someone (Eliezer?) suggested that instead of asking “Why should I pick X over Y?” one replaces it by “Should I choose X or Y?”, especially if the decision is made.
Is there some collection of practical guidelines, either as a post or a wiki category? I find these measurably improve my rationality.
Back Up and Ask Whether, Not Why.
Well, one is, if faced with the choice between X and Y, consider the third alternative.
Sounds like Off The Table. Specifically, when it’s a defiance of Sweet and Sour Grapes.
Great. I think in terms of tropes now.
It may just be me, but it seems like Less Wrong and TV Tropes could productively merge their output, perhaps even into the same site.
They both seek to discuss interesting patterns in human thought and behavior, for the purpose of not only entertaining the reader but also giving them the ability to make more useful predictions and analyses. Though one focuses on fictional worlds while the other focuses on the real world, people at both sites would agree that there is a significant amount of overlap between these two models, to the point where it’s often productive and enlightening to study one through indirect analysis of the other.
Plus, they both have a surfeit of snappy meme names, cross-linking, and internal culture.
I’ve sometimes pondered the bizarrely high level of rationality on TV Tropes, and my guess is that it has something to do with people zooming in, thinking about details, trying to find the obvious consequences and moral implications that no one else sees, thinking in “near mode” about things that would usually be considered in “far mode”, and possibly just being made up of nerds.
Cases in point:
http://tvtropes.org/pmwiki/pmwiki.php/Main/StrawVulcan
http://tvtropes.org/pmwiki/pmwiki.php/Main/FantasticAesop
See http://wiki.lesswrong.com/wiki/Shut_up_and_multiply
Remember that you can be horribly wrong about what you want.
There’s no way to calculate ultimate values. It’s not that I don’t want to live forever because I think that this would have a negative effect. It’s just not something that I want.
You could argue that living forever would further other of my values. In some ways, this is probably true. In the case of being restored from cryonics, however, I doubt it. I would have even less influence on that future world than I have on this, and none of the people that I care about are likely to be there.
I am rather equivocal about living forever. Maybe someday cryonics will be cheap and easy, something that everybody does (or something as cheap and easy as things that everybody does, even if people still don’t do it, perhaps for irrational reasons). My best guess is that I would sign up for it in that case. So if somebody else wants to sign me up for cryonics, handle the paperwork and make the payments, then I don’t mind, but I wouldn’t bother for my own sake.
If there is a fact of the matter of what you should do, if there are moral arguments to change your mind about what you believe you should do, then there is a good chance that your current beliefs about what you should do are wrong in some way. And given that chance, you must decide under uncertainty, do an expected utility calculation, taking into account what value a given action would have if you have so-and-so possible values. You are not allowed to ignore the value uncertainty. Maybe the main hypothesis is that you don’t want to live much longer than normal, but the other hypotheses cry for help; they are not absent, and with the moral strength behind their claims you can’t pretend they are not there. Maybe they lose a decision, but they still contribute to its expected moral value, which is therefore not indifference.
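The expected-utility-under-value-uncertainty argument can be sketched in a few lines. The probabilities and utilities below are entirely hypothetical placeholders, not anyone’s actual values; the point is only the structure of the calculation:

```python
# Decision under value uncertainty: average each action's utility
# over one's hypotheses about what one truly values.
# All numbers are hypothetical.

def expected_value(action_utilities, hypothesis_probs):
    """Expected utility of an action across value hypotheses."""
    return sum(p * u for p, u in zip(hypothesis_probs, action_utilities))

# H1: long life is not valued for its own sake (the main hypothesis).
# H2: long life is valued after all (the minority hypothesis).
probs = [0.9, 0.1]

# Utility of each action under (H1, H2): signing up has a small cost
# under H1 but a large upside under H2.
sign_up = [-1, 50]
do_nothing = [0, 0]

print(expected_value(sign_up, probs))     # 0.9*(-1) + 0.1*50 = 4.1
print(expected_value(do_nothing, probs))  # 0.0
```

With these placeholder numbers the minority hypothesis dominates the calculation, which is the sense in which the other hypotheses “still contribute” even when they lose; of course, different numbers give a different verdict.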
But of course there isn’t.
What you’ve said is fine theoretically, but it’s meaningless unless I have some ultimate values that drive everything else. (You saw some of them on the introductions thread.) I do not value long life for its own sake, and I do not see how being cryogenically revived will further any of the values that I actually hold. In particular, I don’t see how it will enhance the freedom of other people, and I don’t see how it’s at all relevant to any of my personal goals, which will all be obsolete.
If you have a suggestion as to why it would be a good thing for me to sign up for cryonics, that maybe the future will be in dire need of people with my old-fashioned ideas and historical knowledge, or that you have fallen in love with me over the Internet and will miss me when I’m gone, or anything more reasonable but which I can’t think of myself, then please say it. Otherwise I have no reason to do it.
That you shouldn’t care about living for a very long time is exactly a claim about this fact. If there is no fact of the matter of what you should do, you can’t claim that the thing you should do is to not care.
As often, I argue with a faulty argument, but not about its conclusion.
Where have I made that claim? It’s enough that I don’t care about living for a very long time.
And if I say that you do, what is the criterion for telling which statement is the correct one? That criterion is what I referred to as the fact of the matter about what you (should) care about. And if there is a fact, there is possibility of being wrong about it.
Unless by “not caring about X” you by definition mean that there are statements being pronounced like “I don’t care about X”, or certain chemicals being released in your brain, you’d have to settle for not having absolutely privileged knowledge about what you actually care about.
As long as we can agree that whether someone cares about X is an empirically discoverable fact, then there seem to be two currently-possible methods of finding out what that is: introspection and viewing their actions (“revealed preference”).
There is no amount of evidence about facts external to a person that could possibly bear on whether they care about X. You might change whether they care about X by presenting external input, but that’s a rather different thing.
You mentioned the fact of the matter of what I should do. I would hold the fact of the matter of what I should care about in the same contempt. As for the fact of the matter of what I care about, you don’t know what you’re talking about.
The only reason why I’m replying at all is that this is a site dedicated to cognitive biases, and maybe you will cite an interesting post here about how I might be horribly wrong about what I care about. Of course I could be horribly wrong about my intermediate values, but the calculation is not coming out that way.
I think an issue here may be that your statement of caring or not caring about something doesn’t carry much weight when not only are you personally unfamiliar with X, but so is everyone else who ever lived.
You can truthfully state that you aren’t terribly attracted by what you imagine living a second life in Futurama is like; but your mental picture of it is likely to bear very very very little resemblance to what a potential second life will actually feel like. Since cryonics offers a small chance of giving you an actual future life, you should evaluate on the basis of that and therefore pay little attention to what your hopelessly flawed imagination suggests.
It’s like stating that you care/don’t care for a particular hallucinogen without having ever tried it, or any substance similar to it before, or having ever read reports by people who actually tried it. You don’t have a sufficient basis to make your model of it, about which you state a claim of caring/not caring, at all meaningful.
Cryonics costs time and money. The burden of proof is there. If I’m completely ignorant of what it would be like, then I will not spend any effort to bring it about. Byrnema said it well; let the resources be expended on a baby rather than on me.
I would kind of like to try LSD, because I know something about that, and what I’ve heard is mostly positive, when following certain guidelines. A random unknown hallucinogen would pose a danger of long-term health effects. So let hallucinogen X be one whose safety is guaranteed but whose other effects I’m completely ignorant of. Then I don’t care that I’ve never tried hallucinogen X. I am not going to spend time and money seeking it out.
See, for example, this post (although its connection to our discussion is rather indirect).
This seems wrong to me. That Toby should X does not imply that Toby does X, so determining what Toby should want does not settle whether Toby in fact wants it.
Toby does not seem to be making that claim, though perhaps implicitly so. (Much like it could be argued that “X” implies “I believe that X”, it could be argued that “I did X” implies “I should have done X”. But that fails on common usage, where “I did X but I should not have done X” is ordinary.)