My conclusion was that if someone had shot Beckett in order to protect the corpsicle I would have been indifferent
Really? If I assume that you’re operating solely to maximize number of years lived (which I think you are), doesn’t this imply that you think that the corpsicle has a higher probability of living forever than Beckett does? Even if you assume Beckett won’t get cryonics (though hopefully it will become more mainstream by the time she dies), she will likely live another 40-50 years. And I could be wrong (I really have little evidence on it, except the opinions of people on this site), but I thought that most people considered p(singularity in the next 40 to 50 years) to be vastly higher than p(cryonics works and nothing goes wrong).
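The comparison above can be made concrete with a toy expected-value calculation. Every number below is an illustrative assumption made up for the sketch, not anyone’s actual estimate; the thread’s disagreement is precisely over which probabilities to plug in:

```python
# Toy expected-value comparison of "years lived" for the argument above.
# All probabilities and year counts are illustrative assumptions.

P_SINGULARITY = 0.20          # assumed p(singularity in the next 40-50 years)
P_CRYONICS_WORKS = 0.05       # assumed p(cryonics works and nothing goes wrong)
YEARS_IF_EXTENDED = 1000      # stand-in for "very long life" in either scenario
NORMAL_REMAINING_YEARS = 45   # Beckett's likely remaining lifespan

# Expected years for a living person who may also reach a life-extending singularity:
beckett_ev = NORMAL_REMAINING_YEARS + P_SINGULARITY * YEARS_IF_EXTENDED

# Expected years for the corpsicle, contingent entirely on cryonics working:
corpsicle_ev = P_CRYONICS_WORKS * YEARS_IF_EXTENDED

print(beckett_ev, corpsicle_ev)
```

Under these assumed numbers the living person dominates; flip the two probabilities and the ordering reverses, which is why the later exchange about probability estimates matters.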
Really? If I assume that you’re operating solely to maximize number of years lived (which I think you are), doesn’t this imply that you think that the corpsicle has a higher probability of living forever than Beckett does?
Yes it would, but no, I’m not. I never optimize for maximizing the number of lives lived, especially not in the sense that assumes I must care about everyone equally.
In this case, what I was describing is my predicted emotional affect in the counterfactual fictional circumstances in which Beckett dies while attempting headslaughter. Much as I don’t identify with or care about the suffering of the people whose lives Beckett destroys via imprisonment or execution, I would not identify with or care about Beckett’s death while she was perpetrating a significant ethical transgression. Perhaps more importantly, I consider it ethically permissible to defend yourself, your loved ones, the helpless, and the innocent from violation, including with deadly force. While most instances of deadly force perpetrated upon another would provoke empathy for the victim, it makes a big difference if the victim was the one instigating the conflict.
I thought that most people considered p(singularity in the next 40 to 50 years) to be vastly higher than p(cryonics works and nothing goes wrong)
I’m not sure that’s a universal feeling. I certainly would put it the other way around.
But there’s another issue going on here. Many human moral systems allow one to kill others to save a life. Under many moral frameworks, if a person of age x is trying to kill someone of age y > x, it is still OK to kill x. Many different justifications for this are given, but it seems to me that the basis might best be thought of, in terms of decision theory, as a strong precommitment to stop murderers even if it means killing them. If, in such a framework, one views a cryonically preserved individual as morally akin to a living person, then this makes sense. Moreover, even if one does agree with the calculation you suggested, similar emotional pulls (and tribal loyalty to people who engage in or support cryonics) could easily move one’s attitude toward indifference.
I’m not sure that’s a universal feeling. I certainly would put it the other way around.
What are your exact (or, I suppose, approximated) probabilities? Maybe I don’t have a good sense of this.
Many human moral systems allow one to kill others to save a life.
Well, yes. This brings up a whole host of issues, though. The best analogy to the point I’d like to make would be abortion. Similar to fetuses, human moral systems do not view corpsicles as “lives”. Which is why the moral systems you mentioned wouldn’t approve of murdering someone to save a corpsicle, just as it would be widely considered “wrong” to shoot an abortion doctor. My point is not whether it actually is wrong, but that you can’t adopt the “many human moral systems allow this” argument without paying any attention to the fact that those same systems wouldn’t see a corpsicle as a life in the first place.
In the same vein, I don’t think your decision-theory argument pans out either, because the rule “kill people who are trying to murder corpsicles” doesn’t equate to “kill people who are murdering other people” in the view of, for all intents and purposes, everyone, and so doesn’t have the intended deterrent effect. In fact, its actual effect would be to get you jailed, to produce a newspaper story about the “crazy transhumanist who shot someone to save a corpsicle”, and, well, the corpsicle would just be killed anyway once you’re imprisoned.
What are your exact (or, I suppose, approximated) probabilities
I’d estimate around a 5% chance for cryonics to work in some form, and a 1% chance of a Singularity-type event, broadly construed, in the next 40 years.
The best analogy to the point I’d like to make would be abortion. Similar to fetuses, human moral systems do not view corpsicles as “lives”.
Huh? Many moral systems do see fetuses as lives. That’s part of why abortion is so controversial. Moreover, what matters is not whether those systems normally see cryonics patients as alive, but that they have a general rule about saving lives. So if one takes one of those systems and then expands the set of people considered to have moral weight to include the cryopatients, the result follows. One shouldn’t be surprised if, here where a lot of people take cryonics seriously, they have modified pre-existing moral systems to give weight to those already cryonically preserved.
I don’t think many moral systems truly see them as lives on the same level as adult humans. And some don’t see them as lives at all.
One shouldn’t be surprised if, here where a lot of people take cryonics seriously, they have modified pre-existing moral systems to give weight to those already cryonically preserved.
Well, of course. I’m not saying one should be. But you’re offering “many moral systems” as if the populace at large is any real expert on morality. You’re appealing to the authority of the populace, which is an authority that many of us think is, well… pretty dumb.
Sorry if I’ve been unclear. I’m not saying they are correct in general. Nor am I even defending wedrifid’s views. You seemed to ask where those views came from, and I was trying to answer. Explaining where a set of ethical/moral values comes from is not the same as saying it is correct.
Well it seems like an insult to wedrifid to offer an explanation for his actions that you think is wrong.
I don’t know if his attitude is wrong or not. I really haven’t given the question enough thought to answer it either way. Moreover, it shouldn’t be an insult to explain how a given ethical attitude can develop, whether or not one thinks the view is correct. I’m not sure why you think that would be an insult. Is it because there’s a common approach of dismissing the views of people one disagrees with by giving psychological explanations for why they would want to think that? Or is there something more subtle that I’m missing here?
Well perhaps not an insult. But it seems like what you are saying is “This is why I think he might think that but I think he’s wrong.” If you already think he believes something for a reason you believe is wrong, you don’t have a very high opinion of his rationality.
If you already think he believes something for a reason you believe is wrong, you don’t have a very high opinion of his rationality.
If your goal is to improve, it’s more important to notice and correct errors than to deceive people about their absence. I believe it’s insulting, not respectful, to attribute to a rationalist the attitude that they would prefer the knowledge of a flaw withheld.
(You might want to take a precaution of asking first if Crocker’s rules apply, and communicate the bug report privately.)
I’m very confused. I wasn’t talking about a bug report. Unless you mean bug in rationality.
Furthermore, I never attributed that attitude to JoshuaZ. JoshuaZ had no evidence that the flaw he proposed is wedrifid’s thinking. He’s just selecting one potential reason out of the whole set of potential reasons.
Ah, I see. So to say “I’m not defending claim X” sounds more like “I disagree with X” than “I feel confused about X”. I don’t know how universal that is.
If you already think he believes something for a reason you believe is wrong, you’re not putting much faith in his rationality.
Really? You seem to be radically overestimating human rationality in general. We all likely believe things for reasons that are too weak to justify our levels of belief, or believe things due to cultural upbringing and other reasons which have zero actual evidentiary weight. Part of the task of becoming more rational is identifying those issues and dealing with them, especially the higher priority things that impact a lot of other beliefs. Everyone here, including myself, likely believes things for bad reasons. In that context, discussing where beliefs come from seems natural.
I think that wedrifid is one of the more careful, rational, and thought-provoking people here. That doesn’t mean that he’s a perfect rationalist.