I wish there were an article that dealt more precisely with “Experience and Death” rather than “identity and death,” because maintaining an experience is what really interests me. After all, we already don’t stay the same person from moment to moment. We acquire new neural structures, associations, and memories (and lose some old ones that we aren’t even aware of losing), and that doesn’t particularly bother me. So maintaining a particular “identity” does not seem to me to be the problem really worth worrying about.
In fact, let’s suppose that I were about to die of a brain tumor, and there was one medical procedure that could save me, but it would entail destroying a lot of my existing memories, incidentally creating some new ones, rearranging a lot of neural associations, and generally changing my whole personality in a drastic way.
If death and non-experience were not threatening me, then, all other things being equal (that is, assuming this new personality would be about as happy and functional as my current self), my current self would NOT prefer to undergo this procedure, although the preference is not particularly strong; it’s more of an “avoiding Buridan’s ass, risk-averse, all other things being equal, might as well stick with the personality I already know” preference. However, if this procedure full of relatively neutral changes to my personality meant the difference between having future experiences of any sort and not having future experiences of any sort, then I would absolutely jump on board with the procedure.
Let’s kick it up another notch, though. Let’s say there’s a brain procedure that will change a lot of memories and associations in such a way that I will be a happier and more successful, functional person. Say the procedure will raise my IQ by 100 points, increase my willpower, and so on. Then my current self would absolutely elect to undergo the procedure, even without being threatened with death otherwise.
When it comes to the classic teleporter thought experiment, what really interests me is not the usual questions people focus on (“will society recognize the duplicate as me?” or “will I ‘identify’ with my future duplicate self?”), but rather: “will I experience my future duplicate self?” I do not want the teleporter to be a suicide machine as far as my first-person experience is concerned.
When people try to address this question, I often hear things like “your first-person experience will continue if you want to identify with that duplicate,” or statements that imply something similar. This just doesn’t make any sense to me.
Here’s why: imagine the teleporter experiment, except that in this case, when your original body steps into the teleporter chamber and gets vaporized, two duplicates get reconstructed in neighboring rooms.
The first duplicate gets reconstructed in a torture chamber, where it will be tortured for the rest of its life.
The second duplicate gets reconstructed in a normal waiting room, is handed a billion dollars, and is set free back into society.
Now, if it is at all possible, I would like to have the experiences of the second duplicate. How can I make sure that happens? From what I have read, people make it sound as easy as having your pre-teleporter self pre-commit to not caring about the duplicate of yourself that gets materialized in the torture chamber rather than the waiting room.
That just doesn’t make sense to me. Reality doesn’t normally work like that. I can try to pre-commit to not caring about the pain signals coming from my finger before I smash it with a hammer, but I will feel the pain nonetheless. Granted, as of now I don’t have full control over the self-modification of my own source code / nerves and neurons. If I did, I suppose I could reprogram myself not to feel pain, or not to care about pain, in that circumstance.
Still, this only goes so far. If I wanted to have Neil deGrasse Tyson’s experiences, or to experience his brain (because maybe I perceived him as having a higher IQ than mine, or more interesting memories, or more wealth), I could not just go to sleep tonight, pre-commit to caring only about Neil deGrasse Tyson, and expect to wake up tomorrow morning experiencing Neil deGrasse Tyson’s reality, with all of his memories, feeling as if I had always been Neil deGrasse Tyson, with no memory of ever having experienced anything different.
Or maybe I could? How would I know that I have not repeatedly done this? How do I know that I did not just do this 5 seconds ago? I guess I don’t know. But...it just doesn’t FEEL LIKE I have.
Okay, NOW I am experiencing Matthew Opitz. And...NOW I am experiencing Matthew Opitz. And...NOW I am experiencing Matthew Opitz.
How could I seriously believe that I really just started experiencing Matthew Opitz after the 2nd “NOW” just now, and the first two “NOWs” are just false memories that I now have?
But still, it could be possible, when I look at the issue from the vantage point of this new NOW.
Really, when you think about it, the experience of time does not make any sense at all...
This dilemma could not possibly exist, because the map must be smaller than the territory. (Has this concept ever been formalized anywhere on LessWrong before? I can’t seem to find it. Maybe it should be.)
Every time you add a scenario of the form:
“The subject is confronted with the evidence that his wife is also his mother, and additionally with the fact that this GLUT predicts he will do X”
where the GLUT (Giant Lookup Table) itself comes into play, you increase the number of scenarios that the GLUT must compute by one.
Now the GLUT needs to factor in not just situation “n,” but also situation “n + GLUT.” The GLUT that simulates this new “n + 1” complex of situations is the new “GLUT-beta,” and it is a more complex lookup table than the original GLUT.
So, yes, GLUT-beta can simulate “person + interaction with GLUT” and come up with a unique prediction X.
What GLUT-beta CANNOT DO is simulate “person + interaction with GLUT-beta,” because “person + GLUT-beta” is more complex than “GLUT-beta” by itself, in the same way that “n + 1” will always be larger than “n.” The lookup table that could simulate “person + interaction with GLUT-beta” would have to be an even more complex lookup table...call it “GLUT-gamma,” perhaps.
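To make the regress concrete, here is a minimal Python sketch (the situation strings and predictions are placeholders of my own invention, not anything from the original GLUT posts):

```python
# A "GLUT" here is just a lookup table from situation-descriptions to
# predicted actions.
glut = {"subject alone": "does X"}

# To predict "subject + interaction with the GLUT," the new table needs a key
# that spells out the entire contents of the old one. Call it glut_beta.
glut_beta = dict(glut)
glut_beta["subject shown " + repr(glut)] = "does Y"

# glut_beta is strictly larger than glut, because one of its keys contains
# all of glut. Repeating the move only makes things worse:
glut_gamma = dict(glut_beta)
glut_gamma["subject shown " + repr(glut_beta)] = "does Z"

assert len(repr(glut)) < len(repr(glut_beta)) < len(repr(glut_gamma))
# No table can contain a key describing its own full contents: that one key
# would already have to be as long as the whole table.
```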
The problem is that a Giant Lookup Table is a map, but not a very compressed one. In fact, it is the least compressed map possible: a map that is already as complex as the territory it is mapping.
A Giant Lookup Table is like making a 1:1 scale model of the Titanic. What you end up with is just a second Titanic. Likewise, a 1:1 map of England would be...a replica of England.
Now, a 1:1 Giant Lookup Table can still be useful, because it lets you map from one medium to another. That is what a Giant Lookup Table does. If you think of learning a foreign language as learning a Giant Lookup Table of 1:1 vocabulary translations (not a perfect description of foreign-language learning, but bear with the thought experiment for a moment), we can see how a Giant Lookup Table can still be useful even though it compresses the information not at all.
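A toy sketch of that analogy (the vocabulary entries are just examples I picked):

```python
# A 1:1 "translation GLUT" compresses nothing, yet still does useful work by
# mapping one medium (English) onto another (French).
en_to_fr = {
    "ship": "navire",
    "map": "carte",
    "island": "île",
}

print(en_to_fr["ship"])  # -> navire

# One entry per word: the table is exactly as large as the vocabulary it
# covers, which is the defining feature of a GLUT.
```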
So your 1:1 replica of the Titanic might be made out of cardboard rather than steel, and your 1:1 map of England might be made out of paper rather than dirt. But your 1:1 cardboard replica of the Titanic is still going to have to be hundreds of meters in length, and your 1:1 map of England is still going to have to be hundreds of kilometers across.
Now, let’s say you come to me wanting to create a 1:1 replica of “the Titanic + a 1:1 replica of the Titanic.” Okay, fine. You end up basically with two 1:1 replicas of the Titanic. This is our “GLUT-beta.”
Now, let’s say you demand that I make a 1:1 replica of “the Titanic + a 1:1 replica of the Titanic,” but you only have room in your drydock for one ship, so you demand that the result be compressed into the space of one Titanic. It cannot be done. You will have to make the drydock (the territory) larger, or you will have to compress the replicas somehow (in other words, go from a 1:1 replica to a 1:2 scale model).
This is the same dilemma you get from asking the GLUT to simulate “person + GLUT.” Either the GLUT has to get bigger (in which case, it is no longer simulating “person + itself,” but rather, “person + earlier simpler version of itself”), or you have to replace the 1:1 GLUT with a more efficient (compressed) prediction algorithm.
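Here is a sketch of that second option, under my own toy framing, where a constant-size rule stands in for the compressed prediction algorithm:

```python
# A compressed predictor: a rule, not an enumeration of cases. Its size does
# not depend on the size of the inputs it is asked about.
import inspect

def predict(situation: str) -> str:
    # Toy rule of my own invention, standing in for a real predictive model.
    return "does X" if "GLUT" in situation else "does Y"

# Unlike a lookup table, predict() never stores the situations it handles,
# so feeding it a description that contains its own source code costs it
# nothing (run as a script file, so inspect can find the source):
print(predict("subject shown " + inspect.getsource(predict)))
```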
Likewise, if you ask a computer to simulate the entire universe perfectly, quark by quark, in 1:1 fashion, it can’t be done: the computer would have to simulate “itself + the rest of the universe,” and a 1:1 simulation of itself is already as big as itself. So in order to simulate the rest of the universe, it has to get bigger, in which case it has more of itself to model, so it must get bigger again, in which case it has even more of itself to model, and so on, in an infinite regress.
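The runaway requirement can be shown with toy arithmetic (all sizes are arbitrary units I chose):

```python
# A 1:1 simulator inside the universe must hold the rest of the universe
# PLUS a 1:1 model of itself.
rest_of_universe = 1_000_000  # everything except the computer
computer = 1                  # the computer's current size

for step in range(5):
    needed = rest_of_universe + computer  # must model everything, itself included
    print(f"step {step}: size {computer}, capacity needed {needed}")
    computer = needed  # grow to fit... which raises the requirement again

# 'needed' always exceeds 'computer' by rest_of_universe, so the requirement
# is never met: the regress never closes.
```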
The map must always be less than or equal to the territory (and if the map is equal to the territory, then it is not really a “map” as we ordinarily use the term, but more like a 1:1 scale model). So all of this can be simplified by saying:
The map must be smaller than the territory.
Or perhaps this saying should be refined further:
The map must be less complex than the territory.
That is because, after all, one could create a map of England twice as big as the original England. Such a map, however, would not be able to exceed the complexity of the original England. Everywhere the original England has one quark, your map would have two quarks. You would not get increasingly complex configurations of quarks in the larger map of England; if you did, it would not be a faithful map of the original England.
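One last sketch, to separate size from complexity (the byte string standing in for England is my own placeholder):

```python
# A double-size map can be generated from the original by a tiny fixed rule,
# so it is bigger without being more complex.
england = b"quark data standing in for the territory"

def enlarge(territory: bytes) -> bytes:
    # Wherever the original has 1 "quark," the map has 2: a constant-size rule.
    return bytes(b for b in territory for _ in range(2))

big_map = enlarge(england)
assert len(big_map) == 2 * len(england)  # twice the SIZE...
# ...but its shortest description is just "england plus the enlarge rule,"
# barely longer than the original's description: scarcely any added COMPLEXITY.
```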