I’m not even interested in being revived from a coma after several years, using only contemporary technology.
Me neither. I would like to write a clause saying that I am to be awoken only if a living relative feels they need me. This may seem like a cheat, because it’s very unlikely that a child or a grandchild won’t want to revive me, but the truth is, I would be content to leave it in their hands. There is no value to my life beyond my immediate network of connections. If I am awoken in 200 years to a world that doesn’t know me, I might as well be someone else, and I don’t mind being someone else. There’s no difference between my experience of ‘I’ and the one that will develop, some number of years from now, in a newly born baby.
There is no value to my life beyond my immediate network of connections.
That is the saddest statement I have read this whole week.
Indeed. It always amazes me how successful the meme of self-sacrifice has become at persuading otherwise intelligent people to embrace even the most extreme forms of self-abnegation.
For my part, I’ll stick with enlightened self-interest as the foundation of my values and self-worth. It isn’t perfect, but at least it isn’t going to lead me into elaborate forms of suicide.
It sometimes amazes me (but only when I forget about evolutionary psychology, which easily explains it) how successful the meme of self-interest has become at persuading otherwise intelligent people that their life has more value than another’s. (Say another intelligent person’s, to head off one common rationalisation.)
Edit: This paragraph seems to have been confusing. It is somewhat facetious. To be sincere, it should say ‘[…] persuading otherwise intelligent people that it is unintelligent not to value one’s own life more than another’s.’
I see no elaborate forms of suicide proposed here. But of course I would sacrifice my life for another’s, in some situations. (Or at least I think that I would; my evolutionary heritage may have more to say about that when the time comes.) Already I have had occasion to sacrifice my safety for another’s, but so far I’m still alive.
Actually, I’m not really an altruist. But I don’t pretend that my selfishness has a rational justification.
It sometimes amazes me how often commenters on LessWrong (who really should know better if they’ve read the sequences) commit the mind projection fallacy, e.g. by assuming that “value” is a single-place function (“value(thing)”) instead of a two-place one (“value(thing, to-whom)”).
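To make the arity point concrete, here is a toy sketch of the one-place versus two-place distinction (the names and numbers are purely illustrative, not anyone’s actual valuations):

```python
# A one-place "value(thing)" pretends worth is intrinsic to the thing.
# A two-place "value(thing, to_whom)" makes the valuer explicit.

def value(thing: str, to_whom: str) -> float:
    """Toy valuation table; entries are illustrative placeholders."""
    table = {
        ("my_life", "me"): 1.0,
        ("my_life", "stranger"): 0.1,
        ("strangers_life", "me"): 0.1,
        ("strangers_life", "stranger"): 1.0,
    }
    return table.get((thing, to_whom), 0.0)

# Both statements below hold at once with no contradiction,
# because the two calls differ in their second argument:
assert value("my_life", "me") > value("strangers_life", "me")
assert value("strangers_life", "stranger") > value("my_life", "stranger")
```

Dropping the second argument is exactly the mind projection fallacy being described: the apparent disagreement dissolves once the valuer is made explicit.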
I meant for the otherwise intelligent person in question, of course. Sorry for the confusion.
By the way, I interpreted ewbrownv’s comment in precisely the same vein.
I don’t think you understand me. You said:

persuading otherwise intelligent people that their life has more value than another’s
implying that it is wrong to define one person’s life as having more value than another’s. I was pointing out that this is the mind projection fallacy, because things do not have value. They only have value to someone. Thus it is perfectly sane to speak of one’s life as having more value [implied: to one’s self] than another’s.
Yes, of course it is!
And it is equally sane to speak of one’s life as only having value in its relation to others.
My comment was a reply to its parent comment; it does not make sense out of context.
Edit: I have edited the comment in question to be more clear.
What if a living relative just misses you and would like to have you around?