What does it mean to be immortal? We haven’t solved key questions of personal identity yet. What is it for one personal identity to persist?
Currently it’s pretty commonly believed that the end state of the universe is decayed particles, each receding from every other particle faster than light can cross the growing gap between them (thanks to the expansion of space), so each effectively exists in an eternal and inescapable void. And if you only have access to one particle, you can’t do any computation.
The thing is, I’m just not sure it’s even reasonable to talk about ‘immortality’, because I don’t know what it means for one personal identity (‘soul’) to persist. I couldn’t be sure that if a computer simulated my mind, the simulation would be ‘me’, for example. Immortality will likely involve serious changes to the physical form our minds take, and once you start talking about that you get into the realm of thought experiments like this one: if you put someone under a general anaesthetic, remove one atom from their brain, then wake them up, you arguably have a very similar person but not the one who originally went under. From the perspective of the original person, undergoing the operation was pointless, because they are dead either way; the person who wakes from the operation is someone else entirely.
I guess I’m just trying to say that immortality makes heaps of sense if we can somehow solve the question of personal identity, but if we can’t, then ‘immortality’ may be pretty nonsensical to talk about: if we cannot say what it takes for a single ‘soul’ to persist over time, the very concept is ill-defined.
I like your post about the heat death of the universe. If you ever figure anything out regarding the persistence of personal identity, I’d like you to message me or something.
Can you elaborate on the concept of a connection through “moment-to-moment identity”? Would, for example, “mind uploading” break such a connection?
If there’s no objective right answer, then what does it mean to seek immortality? For example, if we found out that a simulation of ‘you’ is not actually ‘you’, would seeking immortality mean we can’t upload our minds to machines and have to somehow figure out a way to keep the pink fleshy stuff that is our current brains around?
If we found out that there’s a new ‘you’ every time you go to sleep and wake up, wouldn’t it make sense to abandon the quest for immortality as we already die every night?
(Note, I don’t actually think this happens. But I think the concept of personal identity is inextricably linked to the question of how separate consciousnesses, each feeling their own qualia, can arise.)
I think consciousness arises from physical processes (as Dennett says), but that’s not really solving the problem, or proving that the problem doesn’t exist.
Anyway, I think you’re right that if you believe being mind-uploaded does (or does not) constitute continuing your personal identity, it’s hard to say you’re wrong. However, what if I don’t actually know whether it does, yet I want to be immortal? Then we have to study the question, to figure out which things we can do keep the real ‘us’ existing and which don’t.
What if the persistence of personal identity is a meaningless pursuit?
Why would something that is not atom-for-atom exactly what you are now be ‘you’?
So, let’s say you die, but a super intelligence reconstructs your brain (using new atoms, but almost exactly to specification), but misplaces a couple of atoms. Is that ‘you’?
If it is, let’s say the computer then realises what it did wrong and reconstructs your brain again (leaving its first prototype intact), this time exactly. Which one is ‘you’?
Let’s say the second one is ‘you’, and the first one isn’t. What happens when the computer reconstructs yet another exact copy of your brain?
If the computer told you it was going to torture the slightly-wrong copy of you (the one with a few atoms missing), would that scare you?
What if it was going to torture an exact copy of you, but only one of the exact copies? There’s a version of you not being tortured; what’s to say that won’t be the real ‘you’?
Wouldn’t there, then, be some copies of me not being tortured and one that is being tortured?
If I copied your brain right now, but left you alive, and tortured the copy, you would not feel any pain (I assume). I could even torture it secretly and you would be none the wiser.
So go back to the scenario: you’re killed, and there are some exact copies made of your brain and some inexact copies. We’ve established that it’s possible to torture an exact copy of your brain while not torturing ‘you’, so surely one could torture one or all of these reconstructed brains and you would have no reason to fear?
So, the graph model of identity sort of works, but I feel it doesn’t quite get at the real meat of identity. I think the key is in how two vertices of the identity graph are linked and what it means for them to be linked, because I don’t think the premise that a person is the same person they were a few moments ago is necessarily justified, and in some situations it doesn’t square with intuition. For example, a person’s brain is a complex machine, and it is being modified all the time as one learns new information, has new experiences, takes new substances, and so on. But imagine it were (using some extremely advanced technology) modified far more dramatically while the person was still conscious: so much so that over the course of a few minutes, a person who once had the personality and memories of, say, you ended up with the rough personality and memories of Barack Obama. Could it really be said that it’s still the same identity?
Why is an uploaded mind necessarily linked by an edge to the original mind? The upload will be less than perfect (even if it’s only off by one neuron, one bit, one atom). If you can still link it by an edge to the original mind, what’s to say you couldn’t also link a very, very dodgy ‘clone’ mind, say the mind of a completely different human, to the original vertex by an edge?
Some other notes: firstly, an exact clone of a mind is the same mind. This pretty much makes sense, and it lets you get away from issues like ‘if I clone your mind, but then torture the clone, do you feel it?’ If you’ve modified the state of the cloned mind by torturing it, it can no longer be said to be the same mind, and we would both presumably agree that me cloning your mind in a faraway world and then torturing the clone does not make you experience anything.
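To make the graph model concrete, here’s a minimal sketch in Python. The representation is my own guess at what the model means (the vertex names and the `link` relation are purely illustrative): vertices are person-moments, edges are whatever relation is supposed to carry identity, and “same person” is just connected-component membership.

```python
from collections import defaultdict

class IdentityGraph:
    """A sketch of the identity-graph model: person-moments as vertices,
    identity-carrying relations as undirected edges."""

    def __init__(self):
        self.edges = defaultdict(set)

    def link(self, moment_a, moment_b):
        """Assert the identity-carrying relation between two person-moments."""
        self.edges[moment_a].add(moment_b)
        self.edges[moment_b].add(moment_a)

    def same_person(self, moment_a, moment_b):
        """True iff the two moments lie in the same connected component."""
        seen, stack = set(), [moment_a]
        while stack:
            m = stack.pop()
            if m == moment_b:
                return True
            if m in seen:
                continue
            seen.add(m)
            stack.extend(self.edges[m])
        return False

g = IdentityGraph()
g.link("you@t0", "you@t1")      # ordinary moment-to-moment persistence
g.link("you@t1", "upload@t2")   # contested: does an upload get an edge?
print(g.same_person("you@t0", "upload@t2"))  # True, *if* you grant that edge
```

Notice that whether `you@t0` and `upload@t2` come out as the same person depends entirely on whether you grant the contested edge; the formalism itself can’t settle which edges are legitimate, which is exactly the objection above.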
Why would us launching a simulation use more processing power? It seems more likely that the universe does a set amount of information processing and all we are doing is manipulating that in constructive ways. Running a computer doesn’t process more information than the wind blowing against a tree does; in fact, it processes far less.
What you are saying doesn’t follow from the premises, and is about as accurate as me saying that magic exists and Harry Potter casts a spell on too-advanced civilisations.
You have to consider that humans don’t have perfect utility functions. Even if I want to be a moral utilitarian, it is a fact that I am not. So I have to structure my life around keeping myself as morally utilitarian as possible. Brian Tomasik talks about this. It might be true that I could reduce more suffering by not eating an extra donut, but I’m going to give up on the entire task of being a utilitarian if I can’t allow myself some luxuries.
I think I agree with what you’re saying for the most part. If your goal is, say, reducing suffering, then you have to consider the best way of convincing others to share your goal. If you start killing people who run factory farms, you’re probably going to turn a lot of the world against you, and so fail at your goal. And you have to consider the best way of convincing yourself to keep pursuing your goal, now and into the future, since human goals can change depending on circumstances and experiences.
In terms of guilt: finding little tricks to rid yourself of guilt probably isn’t a good way to keep yourself caring about, and doing as much as you can for, a given issue. I can know that something is wrong, but if I don’t feel guilty about doing nothing, I’m probably not going to exert myself as hard in trying to fix it. If I can tell myself “I didn’t do it, therefore it’s none of my concern, even though it is technically a bad thing” and absolve myself of guilt, that’s simply going to make me less likely to do anything about the issue.
The “simulation argument” by Bostrom is flawed. It is wrong, and I don’t understand why a lot of people seem to believe in it. I might do a write-up of this if anyone agrees with me, but basically: you cannot reason about what lies outside our universe from within our universe. It doesn’t make sense to do so. The simulation argument uses observations from within our own reality to describe something outside our reality: simulations are or will be common in this universe, therefore most agents will be simulated agents, therefore we are simulated agents. However, the observation that most agents will eventually be (or already are) simulated only applies in this reality/universe. If we are in a simulation, our logic will not be universal but will instead be a reaction to whatever perverted rules the simulation’s creators set up. If we’re not in a simulation, we’re not in a simulation. Either way, the simulation argument is flawed.
No. Think about what sort of conclusions an AI in a game we make would come to about reality. Pretty twisted, right?
I am taking issue with the conclusion that we are living in a simulation even granting that premises (1) and (2) are true.
So I am struggling to understand his reply to my argument. In some ways it simply looks like he’s saying either we are in a simulation or we are not, which is obviously true. The claim that we are probably living in a simulation (given a couple of assumptions) relies on observations of the current universe, and those observations are either unreliable (if we are in a simulation) or lead to a false conclusion (if we aren’t).
If I conclude that there are more simulated minds than real minds in the universe, I simply do not think that implies that I am probably a simulated mind.
If we are not in a simulation, then the reasoning he uses does apply, so his conclusion is still true.
He’s saying that (3) doesn’t hold if we are not in a simulation, so either (1) or (2) is true. He’s not saying that if we’re not in a simulation, we somehow are actually in a simulation given this logic.
We could have random number generators that choose the geometry an agent in our simulation finds itself in every time it steps into a new room. We could make the agent believe that when you put two things together and group them, you get three things. We could add random bits to an agent’s memory.
There is no limit to how perverted a view of the world a simulated agent could have.
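To make that concrete, here’s a toy sketch in Python (all the names are mine and purely illustrative) of a simulated environment that distorts geometry, arithmetic, and memory in exactly these ways:

```python
import random

class PervertedSimulation:
    """A toy environment that can arbitrarily distort what its agent observes."""

    def room_geometry(self):
        # A fresh random geometry every time the agent enters a new room.
        return random.choice(["euclidean", "hyperbolic", "spherical", "4-torus"])

    def count(self, a, b):
        # Arithmetic as the agent experiences it: grouping 1 with 1 "gives" 3.
        return a + b + 1

    def corrupt_memory(self, memory_bits):
        # Flip a random bit of the agent's memory.
        i = random.randrange(len(memory_bits))
        memory_bits[i] ^= 1
        return memory_bits

sim = PervertedSimulation()
print(sim.room_geometry())              # geometry changes room to room
print(sim.count(1, 1))                  # 3, from the agent's point of view
print(sim.corrupt_memory([0, 1, 1, 0])) # one bit silently flipped
```

Any regularity such an agent infers about “physics” or “logic” is an artefact of the simulator’s whims, which is the point being made above.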
(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.
The negations of (1) and (2) are premises if the conclusion is (3). So when I say they are “true” I mean, for example, in the first case, that humans WILL reach an advanced level of technological development. Probably a bit confusing; my mistake.
You seem to be saying that (2) is true—that it is very unlikely that our post-human descendants will create a significant number of highly accurate simulations of their descendants.
I think Bostrom’s argument applies even if they aren’t “highly accurate”. If they are simulated at all, you can apply his argument. The core of his argument is that if simulated minds outnumber “real” minds, then it’s likely we are all simulated. I’m not really sure how our being “accurately simulated” changes things, except that it makes it easier to reason outside of our little box: if we are highly accurate simulations, then we can actually know a lot about the real universe, and studying our little box is pretty much akin to studying the real universe.
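For reference, the counting at the core of Bostrom’s paper can be written out explicitly (this is my paraphrase of his notation, so treat it as a sketch). Let $f_P$ be the fraction of civilisations that reach a posthuman stage, $\bar{N}$ the average number of ancestor-simulations such a civilisation runs, and $H$ the average number of pre-posthuman minds per civilisation. The fraction of all observers that are simulated is then

$$f_{\text{sim}} = \frac{f_P \,\bar{N}\, H}{f_P \,\bar{N}\, H + H} = \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1},$$

which is close to 1 whenever $f_P \bar{N}$ is large. Note that the accuracy of the simulations never enters the ratio; it only matters for whether simulated observers belong in the same reference class as unsimulated ones, which is the point taken up below.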
This, I think, is a possible difference between your position and Bostrom’s. You might be denying the Self-Sampling Assumption, which he accepts, or you might be arguing that simulated and unsimulated minds should not be considered part of the same reference class for the purposes of the SSA, no matter how similar they may be (this is similar to a point I made a while ago about Boltzmann brains in this rather unpopular post).
Let’s assume I’m trying to make conclusions about the universe. I could be a brain in a vat, but there’s not really anything to be gained in assuming that. Whether it’s true or not, I may as well act as if the universe can be understood. Let’s say I conclude, from my observations about the universe, that there are many more simulated minds than non-simulated minds. Does it then follow that I am probably a simulated mind? Bostrom says yes. I say no, because my reasoning about the universe that led me to the conclusion that there are more simulated minds than non-simulated ones is predicated on me not being a simulated mind. I would almost say it’s impossible to reason your way into believing you’re in a simulation. It’s self-referential.
I’m going to have to think about this harder, but try and criticise what I’m saying as you have been doing because it certainly helps flesh things out in my mind.
If you define yourself by the formal definition of a general intelligence then you’re probably not going to go too far wrong.
That’s what your theory ultimately entails. You are saying that you should go from specific labels (“I am a democrat”) to more general labels (“I am a seeker of accurate world models”) because it is easier to conform to a more general specification. The most general label would be a formal definition of what it means to think and act on an environment for the attainment of goals.
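For what it’s worth, one candidate for that most general formal definition does exist: Legg and Hutter’s universal intelligence measure (my example; the parent comment doesn’t name one):

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi,$$

where $\pi$ is the agent’s policy, $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_\mu^\pi$ is the expected total reward $\pi$ achieves in $\mu$. It captures exactly the idea above: goal-achievement averaged over all environments, weighted towards simple ones.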
I don’t think your theory is particularly useful.