You’re applying a label, “I”, to a complex system. None of your definitions for “I” correctly describe the system. The “conflict between common sense and common sense” that you describe appears because you have conflicting simplistic interpretations of a complex system, not because there’s actually anything special going on with the complex system.
Multiple parallel brains with the same input and output are not currently covered by the standard concepts of “I” or “identity”, and reasoning about that kind of parallel brain using such concepts is going to be problematic.
The map is not the territory. Nothing to see here.
Would you say that your probability that the sun rises tomorrow is ill-defined, the map is not the territory, nothing to see here?
If so, I commend you for your consistency. If not, well, then there’s something going on here whether you like it or not.
EDIT: To clarify, this is not an argument from analogy; ill-definedness in one probability spreads to other probabilities. So you can use either an implicit answer to the question of what “I exist” means physically or an explicit one, but you cannot use no answer.
The reason for this unusual need to cross levels is that our introspective observations already start on the abstract level—they are substrate-independent.
This looks like a good assumption to question. If we do attribute the thought “I need to bet on Heads” (sorry, but pun intended) to Manfred One, the “I” in that thought still refers to plain old Manfred, I’d say. Maybe I am not understanding what “substrate independent” is supposed to mean.
Suppose that my brain could be running in two different physical substrates. For example, suppose I could be either a human brain in a vat, or a simulation in a supercomputer. I have no way of knowing which, just from my thoughts. That’s substrate-independence—pretty straightforward.
The relevant application happens when I try to do an anthropic update—suppose I wake up and say “I exist, so let me go through my world-model and assign all events where I don’t exist probability 0, and redistribute the probability to the remaining events.” This is certainly a thing I should do—otherwise I’d take bets that only paid out when I didn’t exist :)
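To make the update concrete, here is a minimal sketch with made-up numbers and toy world names (nothing here is from anyone's actual world-model; it is just the renormalization step spelled out):

```python
# Toy prior over mutually exclusive worlds (made-up numbers).
prior = {
    "heads, and I exist": 0.25,
    "tails, and I exist": 0.50,
    "I was never created": 0.25,
}
i_exist_in = {
    "heads, and I exist": True,
    "tails, and I exist": True,
    "I was never created": False,
}

# Anthropic update: zero out events where I don't exist,
# then redistribute the remaining probability mass.
kept = {w: p for w, p in prior.items() if i_exist_in[w]}
total = sum(kept.values())
posterior = {w: p / total for w, p in kept.items()}

print(posterior)
# {'heads, and I exist': 0.333..., 'tails, and I exist': 0.666...}
```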
The trouble is, my observation (“I exist”) is at a different level of abstraction from my world model, and so I need to use some rule to tell me which events are compatible with my observation that I exist. This is the focus of the post.
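Concretely, the sticking point is where the `i_exist_in` flags in the sketch above come from. Here are two candidate rules (hypothetical names, toy worlds), and the update gives different answers depending on which one I plug in:

```python
def rule_biological(world: set) -> bool:
    # Rule A: "I exist" is compatible only with worlds
    # containing the biological brain.
    return "my brain, in a vat or a skull" in world

def rule_any_instance(world: set) -> bool:
    # Rule B: "I exist" is compatible with worlds containing
    # any instantiation of me, carbon or silicon.
    return ("my brain, in a vat or a skull" in world
            or "a simulation of me on a supercomputer" in world)

world = {"a simulation of me on a supercomputer"}
print(rule_biological(world))    # False: this world gets probability 0
print(rule_any_instance(world))  # True: this world keeps its probability
```

My thoughts are the same under either rule (that is the substrate-independence), so the rule has to come from somewhere other than introspection.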
If I could introspect on the physical level, not just the thought level, this complicated step would be unnecessary: I’d just say “I am physical system so and so, and since I exist I’ll update to only consider events consistent with that physical system existing.” But that super-introspection would, among its other problems, not be substrate-independent.
Oh, that kind of substrate independence. In Dennett’s story, an elaborate thought experiment has been constructed to make substrate independence possible. In the real world, your use of “I” is heavily fraught with substrate implications, and you know pretty well which physical system you are. Your “I” got its sense from the self-locating behavior and experiences of that physical system, plus observations of similar systems, i.e. other English speakers.
If we do a Sleeping Beauty on you but take away a few neurons from some of your successors and add some to other successors, the sizes of their heads don’t change the number of causal nexuses, which is the number of humans. Head size might matter insofar as it makes their experiences better or worse, richer or thinner. (Anthropic decision-making seems not to concern you here, but I like to keep it in mind, because some anthropic “puzzles” are helped by it.)
To clarify, I’m arguing that your post revolves entirely around your concepts of “I” and “people” (the map), and how those concepts fail to match up to a given thought experiment (the territory). Sometimes concepts are close matches to scenarios, and you can get insight from looking at them; sometimes concepts are poor matches and you get garbage instead. Your post is a good example of the garbage scenario, and it’s not surprising that you have to put forth a lot of effort to pound your square pegs into non-square-shaped holes to make sense of it.
Did my last sentence in the edit make sense? We may have a misunderstanding.
No, your last sentence did not make sense, and neither does the rest of that comment, hence my attempt to clarify. My best attempt at interpreting what you’re trying to say looks at this particular section:
‘an implicit answer to the question of what “I exist” means physically’
There I immediately find the same problem I see in the original post: “I exist” doesn’t actually “mean” anything in this context, because you haven’t defined “I” in a way that is meaningful for this scenario.
For me personally, the answer to the question is pretty trivially clear because my definition of identity covers these cases: I exist anywhere that a sufficiently good simulation of me exists. In my personal sense of identity, the simulation doesn’t even have to be running, and there can be multiple copies of me which are all me and which all tag themselves with ‘I exist’.
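Rendered in the style of the sketches above (toy names, just an illustration), my definition is simply one more compatibility rule you could plug into that same update:

```python
def rule_sufficiently_good_simulation(world: set) -> bool:
    # My definition: I exist anywhere a sufficiently good simulation
    # of me exists, running or not; every copy tags itself "I exist".
    return any(thing.startswith("copy of me") for thing in world)

print(rule_sufficiently_good_simulation({"copy of me, stored on disk"}))  # True
print(rule_sufficiently_good_simulation({"somebody else entirely"}))      # False
```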
With that in mind, when I read your post, I see you making an issue out of a trivial non-issue for no reason other than you’ve got a different definition of “I” and “person” than I do. When this happens, it’s a good sign that the issue is semantic, not conceptual.