A Perfect Resurrection

Personal identity is a topic that I associate with a big tangled ball of yarn—you have to spend a long time untangling it to get to something that resembles truth.
Well, I started doing that untangling and got stuck on a question that doesn’t seem to be discussed very often (at least, I haven’t encountered it in the exact framing I’m about to present).
So, imagine we copy a person named Roger, and we do it with absolutely perfect accuracy—down to the quantum properties of every particle that makes up his body. Or, if you prefer, with any arbitrarily high accuracy.
Now we have two Rogers—R1 and R2. And it seems obvious that they do not share one and the same consciousness: they are two different consciousnesses of identical personalities.
(If this isn’t obvious to you, I’d genuinely like to hear how two brains, isolated from each other, could share a single consciousness. Obviously, I’m a physicalist.)
Because the copied person is located at a different point in space, by definition he:
• cannot be physically connected to the original person (R1)
• will instantly begin to have different experiences and will stop being an absolutely exact copy (even standing 10 cm away from R1 already gives R2 a meaningfully different stream of experience).
In our reality, if object A and object B occupy the exact same point, they are the same object.
So it’s impossible to place a copy at the same spatial address as the original. Doing that would mean doing nothing.
Now, we have two Rogers. And they do not share the same subjective experience. Then what grounds do we have to believe that a teleportation machine really “transports us,” rather than transporting “someone else”?
Yes, it’s a perfect copy. As I said, we can imagine it even preserves the properties of the elementary particles that make up Roger’s body. But that exact copy—which truly is Roger—still contains another, new consciousness.
Shortly before writing this post, I read a post by Rob Bensinger in which he discusses a similar situation.
If I understood Rob correctly, his view is that if R2 feels that he “is Roger,” and he really is a sufficiently accurate copy—then why should we care?
Rob says that if it turned out we die and are reborn every second, while still feeling like the same person, nothing would change.
He writes:
If you’d grown up using teleporters all the time, then it would seem just as unremarkable as stepping through a doorway.
If a philosopher then came to you one day and said “but WHAT IF something KILLS YOU every time you step through a door and then a NEW YOU comes into existence on the other side!”, you would just roll your eyes. If it makes no perceptible difference, then wtf are we even talking about?
But I think we can push back on that perspective.
Imagine someone offers you a deal: you must kill your entire family, but afterward your memories of the act are erased and an illusion is created in which your loved ones are alive and happy. Would you accept?
Let’s consider another version of the question—at least to me they seem similar (oh god, and here come identity problems again!):
If Rob Bensinger were offered this: your body will be completely destroyed, but in exchange a being will be created with an incredibly stable conviction that it is Rob Bensinger, who never died and was never destroyed—what is the probability that the real Rob Bensinger would accept?
(In fact, I suspect that probability is high.)
You might want to say these examples aren’t the same. Sure, they aren’t identical—but I think they’re pointing at the same thing: for us humans, what often matters is what actually happened, not just how we experience or interpret the event. I think this even connects to the very idea of rationality.
In both examples, you can choose a “sweet lie” over a “bitter truth,” even though on the surface the examples really do look quite different.
I would not agree to be destroyed, even if I knew that an exact copy of me would exist, convinced that it never died. Because I know that my own subjective experience would end, and the experience of “another me” would begin.
So why am I asking this?
Imagine a resurrection machine. Suppose it’s an unimaginably powerful, gigantic computer—something like a Matrioshka brain around WR-102. Based on indirect evidence, this machine has computed the exact structure of the brain of our deceased Roger (let’s call him R1D1, where D stands for Dead).
The machine recreates an exact copy of Roger, R2D2. Yes, it really is him! There’s no doubt it’s the same personality.
But how can I be confident that… it’s the same conscious experience?
That it’s the same consciousness that ceased to exist at the moment R1D1 died?
I call this idea “perfect resurrection,” because we could stop at resurrecting a sufficiently similar, or even a perfect, copy of Roger. But what interests me (almost purely in a philosophical sense) is whether it’s possible to bring back the original consciousness itself, rather than a “second instance” of that consciousness.
So that Roger would say:
“Oh wow, what a day. I felt terrible, and now I woke up completely healthy. What the hell?”
And it would be literally true.
So that this wouldn’t just be a special case of my earlier proposal: “destroy Rob Bensinger and create a being that’s convinced it is Rob and never died.”
Perhaps there is, even in principle, no way to verify whether it is “the same” consciousness. Or perhaps I have made some serious mistake in my reasoning.
In any case, I hope these thoughts will give many of you an opportunity to exercise your imagination and thinking. And of course, I’ll be doing that along with you.