I’m entirely comfortable with considering a person’s “badness” as a factor in the decision of whether or not to jettison them.
So, as a thought experiment, imagine that through elven magic you get to resurrect some (but not all) people from, say, sixteenth-century Europe. What you have is a bunch of coffins, and some of them are marked “Bad person” and some are marked “Good person”—by the people who buried them 500 years ago. Are you going to take these markings into account?
Would you answer differently if the coffins came from Pharaonic Egypt? An unnamed Neolithic village?
Also, if the “bad person” is an enemy leader in a war this raises other issues, like whether it is considered military aid if anti-war activists in the US resurrect Osama bin Laden to make a point (or if they just resurrect enemy soldiers and let those soldiers go home to make war on the US).
It depends on how much I trust the judgment that so-and-so is a bad person. Obviously, most of us won’t trust the judgment of a 16th-century coffin scribbler much, if for no other reason than that this person doesn’t share our values. Even if they wrote “this person is bad because he committed murder” and “this person is bad because he’s a Jew”, that would only let me discard the judgments that obviously mismatch my values, not the judgments with more subtle mismatches (such as whether he thinks it’s murder for a peasant to use self-defense against a lord).
If we’re discussing society-wide policy on resurrections, then I need to decide how much I trust the judgment of the people in society who do resurrections as well as the judgment of the people who inscribed the coffins. I wouldn’t trust those people’s judgment except in extreme cases, like for someone who committed a serious violent crime and was convicted of it through a reasonably fair process.
In the case of people who couldn’t be convicted, either because they died before trial, or because they are a world leader who could not be put on trial, I think I would require some process that ensures that this doesn’t randomly get applied to anyone who is disliked. Otherwise, saying you can’t resurrect Hitler opens the door to saying “Israel is committing genocide on the Palestinians, resurrecting any dead Israeli leader is like resurrecting Hitler”.
If I’m interpreting Lumifer’s intention with the thought experiment correctly, then we shouldn’t expect future societies to pay any attention to our judgment of who was a good person and who wasn’t. By freezing ourselves we’d basically be jumping into the terrifying unknown.
Given the choice between resurrecting Hitler and resurrecting a random cryonically preserved person, who would you choose? There may be compelling reasons to choose Hitler (maybe we need some information which Hitler knew), but the probability that the random person was worse than Hitler is extremely low, a claim I can make even in the absence of a rigid definition of badness.

Nevertheless, this is an edge case. I would need a very compelling argument about scarcity of resources to even consider the question of whom we should save, much less advocate preserving one person over another.
The Enterprise crew revived Khan without knowing he had been a war criminal in the past. The historical records on that war were incomplete and could have given them no warning of who they were reviving.
The historical records on that war were incomplete
Historical records on nearly any war are incomplete. War crimes of the winning side are seldom documented well.
Why do you think that having been a war criminal in the past is good evidence that an individual would cause harm? It’s very unlikely that an individual who gets revived would be in a position to gain a lot of political power.
You could probably make an argument to not thaw people who are still considered bad by the people doing the thawing. If you don’t have a lasting legacy, then people of your time didn’t really consider you bad enough to worry about for the future.
You could probably make an argument to not thaw people who are still considered bad by the people doing the thawing.
There are two ways such an argument could go. One is that not-thawing is punishment. He was a bad guy and wasn’t punished enough during his lifetime, so now we will punish him more by not thawing him. That argument, as you imagine, has some problems.
The other way is to say that this guy is a danger (or, more generally, a net negative) to the current society. But there are problems here, too. Let’s take everyone’s favourite—Hitler. Is he really a danger to a society, say, a couple of hundred years from now? Or maybe, absent a deeply unhappy Germany, he’ll settle down in some pastoral village, make speeches to the walls of his cottage, and work on improving his art? The question isn’t as easy as it looks.
There are two ways such an argument could go. One is that not-thawing is punishment. He was a bad guy and wasn’t punished enough during his lifetime, so now we will punish him more by not thawing him. That argument, as you imagine, has some problems.
I can think of two situations where you might want to do this:

1. The punishment consisted of execution.
2. The punishment given during his lifetime was one which lasted a certain period of time, and he died before that period was over.

Followed by cryopreservation?? 8-0
You notice I made no effort to define or operationalize “badness”? The scenario in which the “bad” people aren’t reanimated is an edge case, a concession to the fact that some people are generally agreed to be “bad”, but a consistent and widely agreeable definition of badness is difficult to get right. I don’t know what it is, but if anyone does, I’m all ears!
You notice I made no effort to define or operationalize “badness”?
Yes, but the point is that people who characterize somebody as “bad” or “good” and people who decide which bodies to jettison are different people who don’t necessarily share a vocabulary, never mind a common value system.
If you don’t define “bad”, then I don’t understand what this means:
I’m entirely comfortable with considering a person’s “badness” as a factor in the decision of whether or not to jettison them.
Yes, but the point is that people who characterize somebody as “bad” or “good” and people who decide which bodies to jettison are different people who don’t necessarily share a vocabulary, never mind a common value system.
I disagree. If you are the decision-maker and your decision algorithm includes “badness”, it’s your responsibility to define (and calculate) “badness”, based on the data available to you. This is key.
It seems to me that this whole scenario is roughly analogous to the Trolley Problem, with the twist that the decision-maker has access to an unknown amount of data about the people who will live or die. In a situation of minimal information (imagine caskets identified by randomly-assigned IDs, archived in a database which has long since been lost), the decision-maker must choose the survivors based only on the information stored within the body (e.g. DNA, presence of extant uncured diseases, etc.). Given more information (such as Jiro’s caskets), the decision-maker must choose based on the combination of the information within the body and the information attached to it.
So, you must kill m people in order to preserve at least m+1 people, and you have n people from which to choose. How would you do it?
Given available data, I would try to calculate a Societal Expected Value, something like a prediction of how many QALYs a person would save if they were reanimated. Select the m people with the lowest expected value.
Again given available data, in the event of a tie which contains [m, m+1], break the tie(s) by calculating a “badness index” based on current criminal justice practices (e.g. the sum of the average sentence lengths for all that person’s convicted crimes: murder > rape > petty theft, etc.).
Break subsequent ties containing [m, m+1] by selecting randomly.
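The three-step procedure above can be sketched in code. This is a minimal illustration under invented assumptions: each casket is a dict carrying a hypothetical `expected_value` (the QALY-based Societal Expected Value) and a hypothetical `badness_index` (the sentence-length sum); how those numbers would actually be computed is the hard part, and the data here is made up.

```python
import random


def select_jettisoned(caskets, m, rng=None):
    """Choose the m caskets to jettison: lowest expected value first,
    ties broken by highest badness index, remaining ties at random."""
    rng = rng or random.Random()
    caskets = list(caskets)
    # Shuffling first means the stable sort below picks randomly
    # among caskets whose (expected value, badness) keys tie exactly.
    rng.shuffle(caskets)
    ranked = sorted(
        caskets,
        key=lambda c: (c["expected_value"], -c["badness_index"]),
    )
    return ranked[:m]


# Invented data for illustration only.
caskets = [
    {"id": "A", "expected_value": 40, "badness_index": 0},
    {"id": "B", "expected_value": 10, "badness_index": 25},
    {"id": "C", "expected_value": 10, "badness_index": 2},
    {"id": "D", "expected_value": 55, "badness_index": 1},
]

jettisoned = select_jettisoned(caskets, m=2)
print([c["id"] for c in jettisoned])  # → ['B', 'C']
```

With m=2 here, the expected-value step alone decides; the badness index only orders B ahead of C within their tie, and the random step would matter only if the cut-off m fell inside a set of caskets with identical scores on both measures.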
So, what data would you use and how?

That’s interesting. So the value of the person is entirely in his/her usefulness to the society?
calculating the “badness index” based on current criminal justice practices
Well, the problem we are discussing assumes that you do NOT have access to much data (certainly not their rap sheet or the lack thereof) about the frozen people—their name and whether their contemporaries thought them “good” or “bad” is all you have.
In fact, the core of the issue is whether you are willing to accept moral judgements from another time and culture to the extent of making life-and-death decisions on that basis.
So the value of the person is entirely in his/her usefulness to the society?
Not entirely. But it certainly trumps a person’s “badness” in my opinion.
Well, the problem we are discussing assumes that you do NOT have access to much data (certainly not their rap sheet or the lack thereof) about the frozen people—their name and whether their contemporaries thought them “good” or “bad” is all you have.
If the civilization reawakening them can calculate an Expected Value for each person based only on their DNA (and other information contained in the body, such as irreversible injuries), and that estimate is more accurate than judgments clouded by the moral differences between the society which froze them and the society which may awaken them, then the moral judgements of the originating society are probably useless.
I think any society advanced enough to do cryonic revival won’t have to do random resurrections. They can analyse the bodies.
Information can be lost across generations. I know this is fictional evidence, but the “unfreezing Khan” scenario is a possibility.
I don’t know exactly what the “unfreezing Khan” scenario refers to.
A body itself has to hold information to be revived.