This is essentially the Tegmark ensemble multiverse which some very smart people take very seriously. You don’t need to consider yourself stupid for taking this seriously.
Maybe what I should really be asking is “how do I accept that I’m in a level 4 Tegmark world and still care about things getting created and destroyed within my visible universe?” The concept of measure might be an answer, and I haven’t studied it in detail, but I just intuitively doubt it’s going to add up to sanity if I go that route.
Two mantras that work for me (YMMV):
“Stop worrying about existence vs non-existence, and start worrying about accessible vs inaccessible”.
“Regardless of how many branches reality has, the one I live in is the one containing the consequences of my actions.”
The next realization is that you probably live in many branches at once. After all, there’s not enough information in your mind-state to single out one branch precisely, so I guess you inhabit the “packet” of branches that differ by little enough that your mind-state is exactly the same in all of them.
And further, you should act according to your own understanding of which possible worlds you influence, even if in fact you influence a much smaller number of them, but you don’t know which ones.
Well yeah. Branches have branches have branches. “Branches” are only a convenient approximation to a reality that is continuous along some dimensions.
Perhaps this is tangential, but I’m not keen on the idea that there’s a “mind-state” that can be either ‘exactly the same’ or not, because the boundaries of a mind are always going to be ‘fuzzy’. How far down from the diencephalon to the spinal cord do we have to go before “mind-state” supervenes on “brain state”? Surely any answer one gives here is arbitrary.
Some philosophers even speculate that you need to take into account Leonard Shelby’s tattoos in order to uniquely determine his mental state.
Does Leonard Shelby inhabit only those branches where his tattoos are as they are, or all branches containing ‘his body’ but perhaps with different tattoos? Suppose we say ‘well when he asks himself this, his tattoos aren’t part of the computation, so he belongs to all such branches’. That’s all well and good, but almost none of what we think of as “Shelby’s mind” is “part of the computation”. So perhaps ‘he’ is really everyone in the universe who has ever had that train of thought? But what exactly is ‘that train of thought’?
It gets difficult...
Yeah, everyone who has the same train of thought is “the same person” according to many valid definitions. If they weren’t, Eliezer’s solution to the prisoner’s dilemma wouldn’t work. I don’t see any problem with this.
The problem is that the boundaries of a ‘train of thought’ (by which I really mean “the criteria for determining when two beings share the same train of thought”) are, if anything, even more perplexing than the boundaries of a mind.
Perhaps we can ignore these difficulties in particular decision problems by reasoning ‘updatelessly’, but answering the simple question “what am I?” (“what is a mental state?” “when does a system contain a ‘copy of my mind’?”) seems hopelessly out of reach.
It’s fuzzy and subjective; there is no “what is actually part of your mind”, just “things that you consider part of your mind”. I don’t see a problem with this either.
I entirely agree with you, but notice what follows from this: Person X’s decision procedure (and his assignments of subjective probabilities, if we’re serious about the latter) ought not to have a “discontinuity” depending on whether some numerically distinct being Y is either “exactly the same” or “ever so slightly different” from X.
Sounds right; in most cases only the broadest strokes of the algorithm matter. For simple things with a low number of possible states, like almost all game-theory examples, the notion of personhood does not really have any use. There are, however, computations that use things like large swaths of your memory or your entire visual field simultaneously, and those tend to be the ones where the concepts do matter.
I think it’s safe to say that the measure problem is still pretty wide open, not just at the compelling-but-speculative level IV but even at lower levels that are pretty well established as actually existing, such as the Born probabilities in MWI. Until it’s solved comprehensively enough that the level IV multiverse either adds up to normality or adds up to a confusion that can be dissolved, I don’t recommend deriving any apparently normality-defying conclusions from it, particularly sanity-impacting and morality-impacting ones. For now it’s just a fascinating problem to solve. The kind of thinking that keeps leading people to invent the level IV multiverse appears to be a step in the right direction, but it’s by no means a complete solution: existence is still a decidedly confusing problem, and the nature and implications of a level IV multiverse are not yet clear enough that there really is any bullet to bite.
For what it is worth, if one does accept the level 4 Tegmark multiverse, it isn’t clear which measure is the correct one. It isn’t clear that it adds up to normality, much less sanity.
I think you are forgetting Egan’s Law.
If the universe is correctly described by Tegmark’s mathematical universe hypothesis, it’s still the universe.
Edit to add: Considering a philosophy should worry you only if you suspect that your motivation for caring about your visible universe is based on some principle or idea that is likely to be invalidated by that philosophy, if it proves correct. I don’t think such motivations have a large amount of crossover with multiverse-type theories.
how do I accept that… and still care about things getting created and destroyed within my visible universe?

OK, I’ll give it a shot… :-)
There will be mental entities in other universes also thinking about this, some using a decision process similar to yours. Many of them will also have realized that there are others. Welcome to this club.
Those using the same decision theory will tend to make the same decision. So if you can commit to caring about your universe and trying to make it a better place, and go through with it, it’s more likely that other members of the club will too.
Save a googolplex universes here, a googolplex there, and eventually you’re talking real utility.
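The cooperation logic in the last two paragraphs can be sketched as a toy model. To be clear, this is a hypothetical illustration rather than anything from the thread; the function name, payoffs, and copy count are all made up:

```python
# Toy sketch: agents that run the same decision procedure make the
# same choice, so each agent can treat its own output as the output
# of every copy running the procedure in other universes.

def shared_decision(n_copies: int, care_payoff: int, ignore_payoff: int) -> str:
    """The one decision procedure that every copy runs.

    Because every copy executes this same function on the same inputs,
    a copy only has to compare the two uniform outcomes: everyone
    cares for their own universe, or everyone ignores it.
    """
    if n_copies * care_payoff > n_copies * ignore_payoff:
        return "care"
    return "ignore"

# Two copies in different universes each run the procedure independently.
choices = [shared_decision(n_copies=2, care_payoff=3, ignore_payoff=1)
           for _ in range(2)]
print(choices)  # ['care', 'care']
```

The point of the sketch is only that no communication between the copies is needed: the correlation comes from running the identical procedure, not from any causal link.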