Just two minutes ago, a very good anti-cryonics argument occurred to me. This is not my opinion, just my solution to an intellectual puzzle. Note that it is not directly relevant to the original post: I won't claim that the technology doesn't work. I'll claim that it isn't useful for me.
Let us first assume that I don't care too much about my future self, in the simple sense that I don't exercise, I eat unhealthy food, and so on. Most of us are like that, and this is not irrational behavior: we simply heavily discount the well-being of our future selves, even applying a time-based cutoff. (The cutoff is definitely necessary: if a formalized decision theory infinitely penalizes eating foie gras, then I'll skip the decision theory rather than the foie gras. :) )
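The discounting point above can be made concrete with a small sketch. Everything here is an illustrative assumption on my part (the function names, the 5% rate, the 50-year horizon); the contrast is just between ordinary exponential discounting, under which a large enough future penalty can always outweigh a present pleasure, and a hard time cutoff, under which sufficiently distant penalties count for nothing:

```python
# Hypothetical sketch, not from the original post: comparing plain
# exponential discounting with a time-cutoff variant.

def exponential_discount(utility, years, rate=0.05):
    """Standard exponential discounting: value shrinks with time
    but never reaches zero, so an arbitrarily large future penalty
    can still dominate any present gain."""
    return utility * (1 - rate) ** years

def cutoff_discount(utility, years, rate=0.05, horizon=50):
    """Discounting with a hard cutoff: anything beyond the horizon
    is ignored entirely, so no finite present pleasure can be
    outweighed by a penalty that lands after the horizon."""
    if years > horizon:
        return 0.0
    return utility * (1 - rate) ** years

# A penalty 100 years out still registers under pure exponential
# discounting (small but nonzero), but is exactly zero with a cutoff.
print(exponential_discount(1000, 100))
print(cutoff_discount(1000, 100))
```

With the cutoff in place, the foie-gras dilemma dissolves: the distant penalty is simply dropped from the calculation rather than traded off against the meal.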
Now comes the argument: if I sign up for cryonics, I'll have serious incentives to get frozen sooner rather than later. I fear that these incentives will, consciously or unconsciously, influence my future decisions in a way I currently do not prefer. Ergo cryonics is not for me.
What are the incentives? Basically they all boil down to this: I would want my post-cryo personality to be more rather than less similar to my current personality. If they revive my 100-year-old self, there will be a practical problem (many of his brain cells are already dead; he is half the man he used to be) and a conceptual problem (his ideas about the world will quite possibly diverge heavily from my ideas, and this divergence will be a result of decay rather than progress).
To be honest, I’d prefer that discussion here stay focussed on the things that I raise in the article rather than becoming another general discussion about cryonics. My blog post on the subject ends with a “comment policy” that asks commenters to stay focussed on technical feasibility, and to avoid presenting novel arguments against technical feasibility in the comments, since if you accept what I argue, any such argument that had merit would deserve more prominence than a comment.