Costs to (potentially) eternal life

Imagine Omega came to you and said, “Cryonics will work; it will be possible for you to be resurrected and have the choice between a simulation and a new healthy body, and I can guarantee you live for at least 100,000 years after that. However, for reasons I won’t divulge, your surviving to experience this is wholly contingent upon you killing the next three people you see. I can also tell you that the next three people you see, should you fail to kill them, will die childless and will never sign up for cryonics. There is a knife on the ground behind you.”

You turn around and see someone. She says, “Wait! You shouldn’t kill me because … “

What does she say that convinces you?

[Cryonics] takes the mostly correct idea “life is good, death is bad” to such an extreme that it does violence to other valuable parts of our humanity (sorry, but I can’t be more specific).

That’s a quote from a comment on a post about cryonics. “I can’t be more specific” is not doing the comment any favors, and overall it was rebutted pretty well. But I did try to imagine these other valuable parts, and I realized something that remains unresolved for me.

Guaranteed death places a limit on the value of my life to myself: since I will die eventually anyway, sacrificing my life costs me only a bounded amount. So parents shield children with their bodies; Casey Jones happens more often; people run into burning buildings more often. (Suicide bombers happen more often too, I realize.)

I think this willingness is a valuable part of humanity, and I think an extreme “life is good, death is bad” view does do violence to it. You can argue that we should build a world that makes the willingness unnecessary, and I’ll support that; but making the willingness useless is not the same as eliminating it, and eliminating it does violence to our humanity. You can also argue that our humanity is overrated and there’s something better over the horizon, i.e., that the cost is worth it.

But the incentive to save 1+X lives at the cost of your own just got weaker. How do you put a price on heaven? orthonormal suggests that we should rely on human irrationality here to keep us moral: thankfully, we are too stupid and slow to actually change our decisions after recognizing that the expected value of our options has changed, even as the opportunity cost of those decisions grows considerably. I think this a) underestimates humans’ ability to react to incentives and b) underestimates the reward the universe bestows on those who do react to incentives.

I don’t see a good “solution” to this problem, other than to rely on cognitive dissonance to make this seem less offensive in the future than it seems now. The people for whom this presents a problem will eventually die out anyway, since there is a clear advantage to favoring your own survival. I guess that’s the (ultimately anticlimactic) takeaway: morals change in the face of progress.

So, which do you favor more: your life, or your identity?

EDIT: Well, it looks like this is getting fast-tracked for disappeared status. I think it’s interesting that people seem to think I’m making a statement about a moral code. I’m not; I’m talking about incentives and what would happen, not what the right thing to do is.

Let’s say Eliezer gets his wish and many, many parents sign up for cryonics and sign their children up for cryonics. Does anyone really expect that this population would not respond to its new incentive to avoid danger? Anecdotes aside, do you expect them to join the military with the same frequency, to be firemen with the same frequency, to be doctors administering vaccinations in jungles with the same frequency? I don’t think it’s possible to say that with a straight face and mean it; populations respond to incentives, and the incentives just changed for this population.