Oddly enough, I was about to make a similar post, stating it this way:
“What questions would I have to answer, in order to convince you that I had either solved or dissolved the question of identity?”
The things that came up when I thought about it were:
Born probabilities
Continuity of consciousness
Ebborian splitting question
Why aren’t I a Boltzmann brain?
On a side note—a possible confusion w.r.t. identity: You-that-thinks is not necessarily the only thing that gets Utility-Weighting-of-You. If I clone someone perfectly, many people will care about the clone as much as about themselves; but the moment the clone thinks a different thought from the original, it is no longer you-that-thinks.
Could you clarify these in detail, please? You probably should have made the post rather than me; your version seems a lot better.
As for myself, I want to try to get philosophical ideas together to encode a purely selfish agenda, because I’m still considering whether I want to push my brain (as far as it will go, anyway) towards that or a more selfless one. Finding a way to encode a purely selfish agenda as a coherent philosophical system is an important part of that, and it requires an idea of personal identity.
To be honest, part of the reason I waited on making the post was because I was confused about it myself :P. But nevertheless. The following questions probably need to be either answered or dissolved by a complete theory of identity/consciousness; the first is somewhat optional, but refusing to answer it shunts it onto physics, where it becomes much stranger. I’m sure there are other questions, too—if nobody responds to this comment I’ll probably make a new post regardless.
Why the Born probabilities? Eliezer suggests that since the Born probabilities seem to be about finding ourselves in one possible universe versus another, it’s possible that they could be explained by a theory of consciousness. UDASSA takes a crack at this, but I don’t understand the argument well enough to evaluate how well it does.
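For readers unfamiliar with the term: the Born rule is the part of quantum mechanics that turns a branch’s amplitude into the probability of finding yourself in that branch. A minimal sketch in plain Python (the amplitudes here are arbitrary numbers chosen purely for illustration):

```python
# Toy illustration of the Born rule: the probability of observing outcome i
# is the squared magnitude of that outcome's amplitude in the normalized
# wavefunction.
amplitudes = [1 + 1j, 2 + 0j, 0 + 1j]  # made-up, unnormalized amplitudes

# Normalize so the squared magnitudes sum to 1.
norm = sum(abs(a) ** 2 for a in amplitudes) ** 0.5
amplitudes = [a / norm for a in amplitudes]

# Born rule: probability = |amplitude|^2.
born_probabilities = [abs(a) ** 2 for a in amplitudes]
print(born_probabilities)       # [0.2857..., 0.5714..., 0.1428...]
print(sum(born_probabilities))  # ~1.0
```

The mystery the question points at is not this arithmetic, of course, but why the squared magnitude (rather than, say, the magnitude itself) is what we find ourselves experiencing.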
Continuity of consciousness. Part of the hard problem of consciousness—why do I wake up tomorrow as myself and not, oh, EY? Does falling asleep constitute “death”—is there any difference at all we can point to?
The Ebborian splitting question. The Ebborians are a thought experiment Eliezer came up with for the Sequences: they are a species of paper-thin people who replicate by growing in thickness and then splitting. Their brains are spread out across their whole body, so the splitting process necessarily requires the slow split of their brains while they are functioning. The question is: at what point are there “two” Ebborians?
Why aren’t I a Boltzmann Brain? - this one’s long, so I’m breaking it off.
A Boltzmann brain is a response to the argument that the entire universe might just be a momentary patch of order forming by pure chance on a sea of random events, and that therefore our memories never happened and the universe will probably fall apart in the next few seconds. The response is that it is far more likely that, rather than the entire universe coming together, only your brain is spontaneously created (and then dies moments later, as the physics it relies on to function doesn’t exist or is significantly different). The response can be further generalized—the original argument requires something like a Tegmark IV multiverse that contains all universes that are mathematically consistent, but even in a Tegmark I multiverse (simply an infinite universe with random matter patterns in all directions) you would occasionally expect copies of your brain to form in empty space before dying in vacuum, and there would be many more of these Boltzmann brains than coherent versions of yourself.
And yet, it seems ludicrous to predict that in the next second, down will become purple and my liver will sprout legs and fly away by flapping them. Or that I will simply die in vacuum, for that matter. So… why aren’t I a Boltzmann brain?
I see no reason why a non-conscious machine, say a Bayesian superintelligence, would not encounter the Born probabilities. As such, consciousness seems unlikely to be related to them—it’s too high-level to be related to quantum effects.
Continuity of consciousness. Part of the hard problem of consciousness—why do I wake up tomorrow as myself and not, oh, EY?
How do you define “I” such that you can credibly imagine waking up as Eliezer? What difference do you expect in the experience of that Eliezer? I think it’s a bug in the human brain that you can even ask that question; I think it’s incoherent. You-tomorrow is the entity that carries all your memories and is most affected by all of your decisions today; it is instrumentally useful to consider yourself tomorrow as continuous with yourself today.
Their brains are spread out across their whole body, so the splitting process necessarily requires the slow split of their brains while they are functioning. The question is: at what point are there “two” Ebborians?
This is only a problem if you insist on being able to count Ebborian individuals. I see no reason why the number of Ebborians shouldn’t start out as 1 at the point of split and quickly approach 2 via the real numbers as the experiences diverge. As humans we have no need to count individuals via the reals because in our case, individuals have always been cleanly and unambiguously differentiable; as such we are ill-equipped to consider this situation. I would be highly surprised if, when we actually encountered Ebborians, this question was in any way confusing to them. I suspect it would just be as intuitively obvious to them as counting individuals is to us now.
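To make the “counting via the reals” idea concrete, here is a toy sketch (my own framing, not part of the original thought experiment): treat the number of Ebborians as a real number between 1 and 2 that grows with the fraction of their experiences on which the two halves have diverged.

```python
def ebborian_count(shared_experiences: int, diverged_experiences: int) -> float:
    """Return a real-valued individual count in [1.0, 2.0].

    Counts the two halves of a splitting Ebborian as one individual when
    their experiences are identical, two when nothing is shared, and a
    fraction in between proportional to how much has diverged.
    """
    total = shared_experiences + diverged_experiences
    if total == 0:
        return 1.0  # no experiences yet: unambiguously one individual
    return 1.0 + diverged_experiences / total

print(ebborian_count(100, 0))    # 1.0 — just split, still fully identical
print(ebborian_count(100, 100))  # 1.5 — half of their experiences differ
print(ebborian_count(0, 100))    # 2.0 — nothing shared any more
```

The particular formula is arbitrary; the point is only that nothing forces the count to jump discontinuously from 1 to 2.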
Why aren’t I a Boltzmann Brain?
That one seems hard but, again, would equally confound a non-conscious reasoning system. It sounds like you’re taking consciousness as the big mystery of the human experience and thus pinning on it everything marginally related that seems too mysterious to answer otherwise.
I see no reason why a non-conscious machine, say a Bayesian superintelligence, would not encounter the Born probabilities. As such, consciousness seems unlikely to be related to them—it’s too high-level to be related to quantum effects.
Can’t speak to this one, insufficient QM knowledge.
How do you define “I” such that you can credibly imagine waking up as Eliezer? What difference do you expect in the experience of that Eliezer? I think it’s a bug in the human brain that you can even ask that question; I think it’s incoherent. You-tomorrow is the entity that carries all your memories and is most affected by all of your decisions today; it is instrumentally useful to consider yourself tomorrow as continuous with yourself today.
Ehhh, point. That being said, it’s possible I’ve misunderstood the problem—because I’m pretty sure I’ve heard continuity referred to as a hard problem around here...
This is only a problem if you insist on being able to count Ebborian individuals. I see no reason why the number of Ebborians shouldn’t start out as 1 at the point of split and quickly approach 2 via the real numbers as the experiences diverge. As humans we have no need to count individuals via the reals because in our case, individuals have always been cleanly and unambiguously differentiable; as such we are ill-equipped to consider this situation. I would be highly surprised if, when we actually encountered Ebborians, this question was in any way confusing to them. I suspect it would just be as intuitively obvious to them as counting individuals is to us now.
Point.
That one seems hard but, again, would equally confound a non-conscious reasoning system. It sounds like you’re taking consciousness as the big mystery of the human experience and thus pinning on it everything marginally related that seems too mysterious to answer otherwise.
It seems that in general I have conflated “thinking” with “consciousness”, when really the one is computation and the other is some aspect of it that I can’t really ask coherent questions of.
So… uh, what is the problem of consciousness?
I’m not sure, but it seems related to what Eliezer highlighted in “How an Algorithm Feels From Inside”: the way brains track concepts as separate from the components that define them. Because it is hard to recognize “thinking” when looking at your brain, you can imagine consciousness as a first-order object that persists separately from the brain; and if you can see “I” as a concept distinct from the brain that you are, it makes sense to imagine “I wake up as Eliezer”—you just take the “I” object and reassign it to Eliezer’s brain. That’s why the Sequences are so big on dissolving the question and looking at what experiences the concept actually makes you anticipate.
Afaics, the problem is hard not because of some intrinsic difficulty but because it requires us to recognize “ourselves” in our brains, and consciousness is so central to our experience that it’s hard to go up against the intuitions we have about it.
Highlighting for you a section you missed, as I think it important:
“Does falling asleep constitute ‘death’—is there any difference at all we can point to?”