Perhaps this is tangential, but I’m not keen on the idea that there’s a “mind-state” that can be either ‘exactly the same’ or not, because the boundaries of a mind are always going to be ‘fuzzy’. How far down from the diencephalon to the spinal cord do we have to go before “mind-state” supervenes on “brain state”? Surely any answer one gives here is arbitrary.
Some philosophers even speculate that you need to take into account Leonard Shelby’s tattoos in order to uniquely determine his mental state.
Does Leonard Shelby inhabit only those branches where his tattoos are as they are, or all branches containing ‘his body’ but perhaps with different tattoos? Suppose we say ‘well when he asks himself this, his tattoos aren’t part of the computation, so he belongs to all such branches’. That’s all well and good, but almost none of what we think of as “Shelby’s mind” is “part of the computation”. So perhaps ‘he’ is really everyone in the universe who has ever had that train of thought? But what exactly is ‘that train of thought’?
It gets difficult...
Yeah, everyone who has the same train of thought is “the same person” according to many valid definitions. If they weren’t, Eliezer’s solution to the prisoner’s dilemma wouldn’t work. I don’t see any problem with this.
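For concreteness, here is a toy sketch (my own illustration, not Eliezer’s actual formulation) of the source-code-matching idea behind that prisoner’s dilemma solution: an agent cooperates exactly when the opponent’s decision procedure is literally identical to its own.

```python
# Toy model: "same train of thought" operationalized as literal
# source-code equality between the two agents' decision procedures.
SOURCE = "return 'C' if opponent_source == my_source else 'D'"

def decide(my_source: str, opponent_source: str) -> str:
    """Cooperate iff the opponent is running exactly the same program."""
    return "C" if opponent_source == my_source else "D"
```

Two byte-identical copies cooperate; change a single character anywhere and they defect, which is precisely the all-or-nothing boundary at issue.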
The problem is that the boundaries of a ‘train of thought’ (by which I really mean “the criteria for determining when two beings share the same train of thought”) are, if anything, even more perplexing than the boundaries of a mind.
Perhaps we can ignore these difficulties in particular decision problems by reasoning ‘updatelessly’, but answering the simple question “what am I?” (“what is a mental state?” “when does a system contain a ‘copy of my mind’?”) seems hopelessly out of reach.
It’s fuzzy and subjective: there is no “what is actually part of your mind”, just “things that you consider part of your mind”. I don’t see a problem with this either.
I entirely agree with you, but notice what follows from this: Person X’s decision procedure (and his assignments of subjective probabilities, if we’re serious about the latter) ought not to have a “discontinuity” depending on whether some numerically distinct being Y is either “exactly the same” or “ever so slightly different” from X.
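One way to sketch a decision procedure without that discontinuity (a made-up illustration, not a worked-out decision theory): let the weight given to cooperation vary continuously with how similar the other agent’s program is to your own, e.g. via textual similarity.

```python
import difflib

def cooperation_probability(my_source: str, opponent_source: str) -> float:
    """Probability of cooperating, varying continuously with how similar
    the two decision procedures are, instead of jumping between
    'exact copy' and 'any difference whatsoever'."""
    return difflib.SequenceMatcher(None, my_source, opponent_source).ratio()
```

An exact copy gets probability 1.0, and a near-copy gets something close to 1.0 rather than dropping straight to defection.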
Sounds right; in most cases only the broadest strokes of the algorithm matter. For simple things with a small number of possible states, like almost all game-theory examples, the notion of personhood does not really have any use. There are, however, computations that use things like large swaths of your memory or your entire visual field simultaneously, and those tend to be the ones where the concepts do matter.