Another aspect which I have not seen discussed so far is of a quasi-philosophical nature:
What makes you “you”? Yes, it is possible that my brain could have been started up last Thursday, giving me the illusion of the memories I have. But say I made that same copy next Thursday. That being would have all of those memories that I have, and all prior associations and synapses, but if he had an accident with a nailgun the next day, I wouldn’t feel any physical pain. That is because he is not me. Even if he has almost identical physical circumstances, there is no continuity between him and me. If I died before the nailgun incident, it would not be me that felt the pain.
Instead, let’s say I froze my brain. My brain that makes me me, stops working. That makes the thing that is me cease to exist. The continuity stops there. The person that you revive 200 years into the future may have the same state that I left with, but it wouldn’t be me in the sense that makes you want to preserve “yourself” via cryonics in the first place. It defeats its own purpose.
If it is your pattern, your personality, your way of life that you want to preserve, you win. Parents attempt this all the time, tribes, nations, and cultures do it. I have replicas of myself and relatives on The Sims 3 that I identify with. He acts the same as if I was in that situation, so he is in effect a copy of me. If you downloaded your brain and copied it, you could have copies that would do just as you would, but the one in the body reading this comment right now, that’s the instantiation that you most likely care about preserving. Sure, it would be cool to have someone exactly like you living in the far future, but you’d only be the brain donor who died to create him.
The only way of preserving yourself indefinitely is to gradually transition your personality onto better hardware. Currently, it is my opinion that ethics is the only thing preventing the creation of a brain-computer interface (BCI) suitable for the task.
My brain that makes me me, stops working. That makes the thing that is me cease to exist. The continuity stops there. The person that you revive 200 years into the future may have the same state that I left with, but it wouldn’t be me in the sense that makes you want to preserve “yourself” via cryonics in the first place. It defeats its own purpose.
When a wood frog freezes itself for the winter does it unfreeze as a different frog?
I have replicas of myself and relatives on The Sims 3 that I identify with. He acts the same as if I was in that situation, so he is in effect a copy of me.
What? If your friends asked questions to both you and your Sims 3 replica, you think they wouldn’t be able to tell which was which? It’s clearly not anywhere close to a copy of you.
When a wood frog freezes itself for the winter does it unfreeze as a different frog?
You’d have to ask the wood frog. If the wood frog were to ponder its own existence, it would be a different frog. In both cases, human and frog, it’s the same body before and after freezing. Note: frogs do freeze themselves and still function afterward; humans don’t. Humans have a lot of their energy invested in cognitive functions, and they don’t act quite the same without it; that’s what freezing demolishes.
What? If your friends asked questions to both you and your Sims 3 replica, you think they wouldn’t be able to tell which was which? It’s clearly not anywhere close to a copy of you.
The best answer to this would be not to reply (that would be highly amusing), but aside from that, I hope you understood that I meant that is how I would react in the Sims world. Sims don’t even need to breathe air; they don’t have lungs. Given the somewhat different conditions in Pleasant Valley, Sims have somewhat different requirements than humans. If I modified my human body to become a Sim, then I would be exactly what I designed there.
If I modified my human body to become a Sim, then I would be exactly what I designed there.
I’m not sure I even understand what claim you’re making. Just to ask a simple question: if you performed the modifications you are envisioning, would you anticipate remembering having done so? If so, does your Sims 3 replica have a corresponding memory? Does it have the capacity to access such a memory, were one created for it?
If I performed the modifications I am envisioning, I do not anticipate that the end product (the Sim version of me) would remember having done so. Sims have memories, but no memories of events outside the Sims 3 environment. If human-me were to specifically write in that memory, a player of TS3 would see it, but the Sim himself would only manifest different behaviours based on the moodlet effects attached to that memory (akin to conditioning); he would not specifically understand what had actually happened.
It’s more in the sense of, “If I were a ladybug, how would I act?” Of course, I would act exactly like any other ladybug would, but The Sims are designed to look and act much more similar to humans than ladybugs are.
My point was going to be along the lines of: I as a human can identify with my Sim copy, given my social-primate and cognitive human skills. And yes, I can also identify with a computer program that works exactly like me, or a brain image of me loaded onto a cylon, or a magically revived frozen body that my consciousness used to be running on; but I am aware that the me from right now is not going to be in any of those. Gradual hardware upgrade is the only thing which will preserve the future-body descendants of now-me. And while near-death-me might disagree, now-me is actually okay with just replication. I’m just surprised to see so many people who are otherwise rational turn a blind eye to this issue.
Hey, what are you reading downvoted comments for? :)
I’m a rebel that way.
Thanks for clarifying.
For my own part, if I get to choose now between a future where something exists that remembers being me but has absolutely no continuity with my current body (that is, is not a “future-body descendant of now-me”), and a future where something exists that has continuity with my current body but does not remember being me, I choose the former. (Of course, both might suck, depending on other details.)
Whether either of them is “really me” seems like a confused question.
And, sure, within the space of possible configurations of a system with many fewer degrees of variation than I have, it’s possible to select the most me-like available configuration and identify with it on those grounds, which allows me to single out a particular Sim, or a particular ladybug, as being “me”. I consider this sort of identification to be similar to how people identify with a football team or a rock band, though, and not particularly relevant to what we’re talking about when we talk about preserving individual identity in an artificial matrix.
I consider this sort of identification to be similar to how people identify with a football team or a rock band, though, and not particularly relevant to what we’re talking about when we talk about preserving individual identity in an artificial matrix.
Now we’re getting somewhere! What I am trying to say is that when we are talking about preserving individual identity in an artificial matrix, we are mistakenly identifying with the copy because it closely resembles us, it’s “Team Me”, but really it is “Me” that we want to preserve, not Team Me.
Well, I agree that what gets preserved within an artificial matrix is in an important sense “Team Me” rather than “Me”. But I would say the same thing about what gets preserved within a future-body descendant of now-me.
Whereas it sounds like you would say that what gets preserved within a future-body descendant of now-me is really “Me” rather than “Team Me”… yes? If so, what grounds do you have for believing that?
More generally, I think the concept of “Me” as distinct from various degrees of “Team Me” membership is confused and doesn’t carve reality at its joints. There’s no such thing; all there is is various degrees of “Team Me” membership.
I also think that the degree of “Team Me” membership a Sim or a ladybug is capable of is radically different from (and inferior to) the degree of “Team Me” membership a high-fidelity copy of now-me or a future-body descendant of now-me can have, such that equating the two is importantly misleading, though in a technical sense accurate.
Whereas it sounds like you would say that what gets preserved within a future-body descendant of now-me is really “Me” rather than “Team Me”… yes? If so, what grounds do you have for believing that?... More generally, I think the concept of “Me” as distinct from various degrees of “Team Me” membership is confused and doesn’t carve reality at its joints. There’s no such thing; all there is is various degrees of “Team Me” membership.
Agh! You just killed “Me”! Thank you!
It is true: the only distinction I had really made between Me and Team-Me was classical physical continuity; that was the only place I could see to draw a line. If there is no line (and yes, I fearfully agree with you on that), then my reason for uploading or freezing (aside from survival-instinct projection) is to preserve something that runs on the same or similar programming as the rest of Team-Me. From an objective point of view, I wouldn’t really consider my pattern worth preserving. What can Team Me do that a sufficiently advanced (and most likely more efficient) AI couldn’t do better?
You sound like you’re implicitly treating that “objective” point of view as more important than your actual (presumably subjective) point of view. Is that true? If so, on what grounds?
To answer your question, that is not true. The objective point of view and subjective points of view are equal, because they are just different points of view. If it sounds like I consider it more important, it is only because my mind actually does agree more with the objective view. At this point in time the subjective point of view is foreign to me. I find the big picture so fascinating that I become less concerned with my own part in it, but I don’t expect that of anyone but me. I am more interested in creating something good than preserving something flawed, even if that thing happens to be me.
Instead, let’s say I froze my brain. My brain that makes me me, stops working. That makes the thing that is me cease to exist. The continuity stops there.
How is sleep, unconsciousness, deep anesthesia any different, though?
But further, why is continuity important? If intelligence can be simulated on a computer, and it seems likely that an intelligence sophisticated enough to ponder its own consciousness probably really is conscious, why would a reboot have any effect on its identity?
In any case, I don’t have any answers. Eliezer’s Identity Isn’t In Specific Atoms seems to me to suggest that cryonics is probably unnecessary if I can instead do a molecule-level brain-image upload before death (assuming that turns out to be possible). But if that’s so, don’t we also need to reject the idea of a personal future?
How is sleep, unconsciousness, deep anesthesia any different, though?...But further, why is continuity important?
Those two questions are two sides of the same coin to me. Those examples preserve continuity in the form of synapses and other neural connections. In none of those cases does the brain actually stop running, just the consciousness program. You can’t just pull out someone’s heart while they’re anesthetized—if the brain’s cells die from lack of fuel, you’re destroying the hardware that the consciousness program needs to reboot from.
If intelligence can be simulated on a computer, and it seems likely that an intelligence sophisticated enough to ponder its own consciousness probably really is conscious, why would a reboot have any effect on its identity?
Assuming that you have programmed it to care about its own consciousness, not just to ponder it, the first boot would die, and the reboot would wake up thinking it was the first boot.
...cryonics is probably unnecessary if I can instead do a molecule-level brain-image upload before death (assuming that turns out to be possible). But if that’s so, don’t we also need to reject the idea of a personal future?
When you upload your brain-image, please make the most of your life after that, because it would be the same as with the computer. You will die in fear and loneliness, and your copy will wake up convinced he is you. (That would make a great fortune-cookie message!) In both cryonic preservation and brain upload, the original quantum system which is you is being shut down; no splitting realities are involved here (except the usual ones). You are going to experience death, and it was my understanding that the point of cryonics and mind transfer was to avoid experiencing death. (By “experience death”, I mean that your mind-pattern ceases to function.) Anyone deriving comfort from those two methods should seriously consider this concrete downside to them.
Assuming that you have programmed it to care about its own consciousness, not just to ponder it, the first boot would die, and the reboot would wake up thinking it was the first boot.
But if a consciousness can be simulated on a computer running at multiple GHz, would not a simulation on a computer running at one cycle per hour also be consciousness? And then if you removed power from the computer for the hour between each cycle, is there any reason to think that would affect the simulation?
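For what it’s worth, the pausing intuition here has a clean software analogue: a deterministic simulation that is serialized, “powered down”, and restored between every step ends in exactly the state an uninterrupted run reaches. A minimal sketch (the update rule is an arbitrary stand-in, not a model of a mind):

```python
import pickle

def step(state):
    # One "cycle" of the simulation: an arbitrary deterministic update.
    return (state * 31 + 7) % 1000003

def run_continuously(state, cycles):
    for _ in range(cycles):
        state = step(state)
    return state

def run_with_interruptions(state, cycles):
    for _ in range(cycles):
        # "Power down" between cycles: serialize the state, discard the
        # live object, then restore it and continue. Any amount of wall-
        # clock time could pass between dumps and loads.
        frozen = pickle.dumps(state)
        state = None
        state = pickle.loads(frozen)
        state = step(state)
    return state

# The interrupted run is indistinguishable from the continuous one.
assert run_continuously(42, 1000) == run_with_interruptions(42, 1000)
```

From inside the simulation there is no fact of the matter about how long each pause lasted, which is the force of the one-cycle-per-hour question.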
My intuition as well. Continuity seems less of a big deal when we imagine computer hardware intelligence scenarios.
As another scenario, imagine a computer based on light waves alone; it’s hard to see how a temporary blocking of the input light wave, for example, could cause anything as substantial as the end of a conscious entity.
However, if I think too much about light waves and computers, I’m reminded of the LED cellular-automaton computationalist thought experiment and start to have nagging doubts about computer consciousness.
Perhaps I misunderstood what you meant by “reboot”. The situation you are describing now preserves continuity, and therefore is not death. In the first situation, I assumed that information was being erased. Similarly, neural cell death corrupts the entire program. If there were a way to instantly stop a human brain and restart the same brain later, that would not be death; but freezing yourself now does not accomplish that, nor does copying a brain.
(Unimportant note: it wasn’t I who brought up reboots.)
Anyway, I believe that’s why cryonics advocates believe it works. Their argument is that all the relevant information is stored in the synapses, etc., and that this information is preserved with sufficient fidelity during vitrification. I’m not sure about the current state of cryopreservatives, but a good enough antifreeze ought even to be able to vitrify neurons without ‘killing’ them, meaning they could be restarted after thawing. In any case, cellular death should not “corrupt the entire program”, because as long as no important information is lost, we can repair it all.
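The “as long as no important information is lost, we can repair it all” claim is, at bottom, an error-correction claim. A toy sketch of that idea under a deliberately simple assumption (redundant copies with majority-vote repair; nothing here is meant to model neurons or actual cryodamage):

```python
def corrupt(copy, index, value):
    # Return a damaged duplicate with one position overwritten.
    damaged = list(copy)
    damaged[index] = value
    return damaged

def repair(copies):
    # Majority vote per position across the redundant copies: as long as
    # most copies agree at each position, the original is recoverable.
    return [max(set(column), key=column.count) for column in zip(*copies)]

original = [1, 0, 1, 1, 0]
copies = [list(original), corrupt(original, 2, 0), list(original)]

# One damaged copy out of three: the vote recovers the original exactly.
assert repair(copies) == original
```

The serious version of the argument is about whether vitrification damage stays within the “repairable” regime; the sketch only shows why redundancy makes some damage recoverable in principle.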
I’m much less confident about the idea of uploading one’s mind into a computer as a way of survival, since that involves all sorts of confusing stuff like copies and causality.
How do you know that?