Apply the reversal test.
I don’t see how to do that; whether I am revived from cryonic storage is not a continuous thing.
But you could ask analogous questions. Am I glad that I was born? No, why should I care? I certainly don’t think that it’s a good thing to contribute to the birth of as many people as I can, just so that they can live; I only care about people who already exist, and that goes for me as well as for anybody else.
But I think that this would be a good question to ask to a lot of other people who might find cryonics unappealing because they’ve come to terms with death. I think that a lot of people have, or think they have, accepted that they will die but are still glad that they were born. So you might ask them if they would similarly be glad to find themselves cryonically revived. But I would not.
I’m curious why people say things like:
“value is subjective and I happen to care only about people who already exist”
“value is subjective and I happen to care only about people who live in the same country as me”
“value is subjective and I happen to care only about my friends and family”
but not:
“value is subjective and I happen to care only about whoever is currently in my field of vision”.
I wrote:
I didn’t write that accurately. I should have said:
I only care about people who actually exist (or will exist)
I even care about people who potentially will exist in proportion to the probability that they will exist, which really should be included in the term ‘actually’.
So for example, I care that the people who become pregnant next year get good prenatal care for the sake of the children that they will bear the year after (as well as for their own sakes).
However, I don’t care whether they actually become pregnant, or (given that they do) that those children actually are born, except as this affects them and other actual people. All in all, I wish that fewer people became pregnant and fewer babies were born, for various reasons having to do with how this affects other people, although my main emphasis is that women should have the freedom to choose whether to become and remain pregnant. (So in this vein, I donate to Planned Parenthood, and once did volunteer work for them, and may do so again. This also helps with the prenatal care.)
Then is it fair to say that, all else being equal, for people who don’t currently exist, you’re indifferent between them having no life and an OK life, and you’re indifferent between them having no life and a great life, but you prefer them having a great life to an OK life?
This must be a standard problem in utilitarian theory, but I don’t know its name.
In case you haven’t read my comment introducing myself, know that my ultimate social value is freedom, a sort of utilitarian calculus where utility is freedom. So to judge whether someone should live, the main question to ask is whether they want to live. (I forgot to say in my reply to MartinB that of course I am against medical treatment of those who do not wish it.)
But those who do not exist do not wish anything. So it doesn’t matter.
If by ‘a great life’ you mean a life of great freedom, then I prefer that to the alternative life. But one can only judge what such a life actually is once the person actually exists and has wants. I support prenatal care only on the basis of a prediction about what people will want later, like wanting to be healthy.
It still doesn’t hang together mathematically, since I should simply take expected utility/freedom. As I also said in my introductory comment, I don’t really believe that any utilitarian calculus captures my values. I can understand decision theory once the utilities are assigned, but I don’t understand how to assign utilities in the first place.
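The inconsistency in the stated preferences can be made concrete with a quick brute-force check (the outcome names and the numeric utility range below are purely illustrative): any utility function that is indifferent between no life and an OK life, and also between no life and a great life, must assign all three outcomes the same value, so it can never strictly prefer the great life to the OK one.

```python
from itertools import product

# Hypothetical outcome labels standing in for the three cases discussed above.
outcomes = ["no_life", "ok_life", "great_life"]

def matches_stated_preferences(u):
    """True if utilities u agree with: no_life ~ ok_life,
    no_life ~ great_life, and great_life > ok_life."""
    return (u["no_life"] == u["ok_life"]
            and u["no_life"] == u["great_life"]
            and u["great_life"] > u["ok_life"])

# Brute-force every assignment of utilities 0..2 to the three outcomes.
# Indifference forces u(ok_life) == u(great_life), so the strict
# preference can never also hold, and no assignment passes.
found = any(
    matches_stated_preferences(dict(zip(outcomes, vals)))
    for vals in product(range(3), repeat=3)
)
print(found)  # False: no single utility function represents all three judgments
```

The same argument goes through for arbitrary real-valued utilities, since indifference (equality) is transitive; the small integer range here is just enough to illustrate the point.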
Pretty sure this is just the flip side of the repugnant conclusion http://en.wikipedia.org/wiki/Mere_addition_paradox, which is about whether you should care about average welfare or total welfare.
Thanks, that’s it!
I do say that. I care (in terms of how I actually act) about people I see, people I like, people in my extended networks, and all living people. For example, if someone had a heart attack, I would help them even if rationally, the time I spent could be converted into far more lives through optimal giving.
Sure, but my point is you probably wouldn’t use this example of “caring” as a justification in abstract philosophical debates about, e.g., the ethics of cryonics, because visual-field-dependent morality is absurd enough to make it intuitively reasonable that values you truly care about should hold up to some sort of reflection.
It’s important not to be too loose with the idea of “care in terms of how I actually act”, or you’ll end up saying you care about being near large masses or making hiccup noises. You can plausibly argue that falling and hiccups aren’t behavior in the way that helping someone with a heart attack is, but it’s not like there’s a bright dividing line.
You know the “extended mind” hypothesis that says things like calculators or search engines can in some circumstances be seen as parts of your mind? It seems like the flip side of that is an “abridged mind” hypothesis where some parts of your brain are like alien mind control lasers, except located in your skull.
Well, yes. I have a reflectively endorsed belief that being an altruist is good and proper. If I were to endorse selfishness, I would include exceptions for those categories, in increasing order of effect on my decisions.
If value is subjective, there’s nothing particularly odd about saying the first things but not the second. That’s just their subjective preference.
Because that’s not really how humans work. We care more about things right in front of us, but we don’t stop caring about someone just because they’re not in our field of vision, and we don’t necessarily start caring about anyone who is.
So imagine that I said “to a substantial extent”.
Sure, but there are things close enough to what I said that are true but that would have been more of a pain to write down.
If the prospect of an unboundedly long life stretched ahead of you and everyone else, would you be thinking “I wish my lifespan were much shorter—perhaps less than a century”?
No, feeling that a century was about the right length was just a phase that I went through. (Although I put it in past tense, I didn’t really make that clear, sorry.)
In fact, right now, I only want to live a few more years, because that’s how long it will take to do the things that I want to do now. However, I predict that in a few more years, I’ll want to live a few years more, and so on for a while, so I plan ahead in that expectation, but that’s all. (There are also a few people that I want to outlive, for their sakes, but that reason will expire in a few decades and it would not help them if I were cryonically preserved, since they probably won’t live long enough to see me awakened.)
It’s hard to be sure about a century from now, but I predict that, given that I live for a century, I’ll want (possibly a few years of wanting at a time) to live for another century. So I have a long-term interest in life extension, which gives me the prospect that you described. But that’s not the same as cryonics.
What’s the big difference with cryonics - is it the time you spend frozen? How long would you have to spend frozen before you prefer death? Clearly eight hours would be OK with you, since I assume you sometimes sleep that long—so is there a cutoff period after which you would rather die than be revived?
Somewhere between a few years and a few decades, I think.
(IANAL) I think that would be a very simple clause to include in any freezing arrangement.
The extremely small chance that cryonics will work within that time doesn’t justify the expense.
But for those who can afford more, it would be interesting to see short-term cryonics added to a health insurance plan.
I’ve seen people post a number of practical guidelines like this (which is new to me). Another example might be the Litany of Tarski. Someone (Eliezer?) suggested that instead of asking “Why should I pick X over Y?” one replaces it by “Should I choose X or Y?”, especially if the decision is made.
Is there some collection of practical guidelines, either as a post or a wiki category? I find these measurably improve my rationality.
Back Up and Ask Whether, Not Why.
Well, one is, if faced with the choice between X and Y, consider the third alternative.