I don’t see any problem with [the idea that everyone who has the same train of thought is “the same person”].
The problem is that the boundaries of a ‘train of thought’ (by which I really mean “the criteria for determining when two beings share the same train of thought”) are, if anything, even more perplexing than the boundaries of a mind.
Perhaps we can ignore these difficulties in particular decision problems by reasoning ‘updatelessly’, but answering the simple question “what am I?” (“what is a mental state?” “when does a system contain a ‘copy of my mind’?”) seems hopelessly out of reach.
It’s fuzzy and subjective: there is no “what is actually part of your mind”, just “things that you consider part of your mind”. I don’t see a problem with this either.
I entirely agree with you, but notice what follows from this: Person X’s decision procedure (and his assignments of subjective probabilities, if we’re serious about the latter) ought not to have a “discontinuity” depending on whether some numerically distinct being Y is either “exactly the same” or “ever so slightly different” from X.
Sounds right. In most cases only the broadest strokes of the algorithm matter. For simple things with a low number of possible states, like almost all game theory examples, the notion of personhood does not really have any use. There are, however, computations that use things like large swaths of your memory or your entire visual field simultaneously, and those tend to be the ones where the concepts do matter.