MW vs MIW mental models
A standard description of Multiple Worlds is that every time a particle can do more than one thing, the timeline branches into two, with the particle doing X in one new branch and Y in the other. This produces a mental model of a tree of timelines, in which any given person is copied into innumerable future branches. In some of those branches, they die; in a few, they happen to keep living. This can lead to a mental model of the self where it makes sense to say something like, “I expect to be alive a century from now in 5% of the future.”
Multiple Interacting Worlds seems to posit a somewhat different model. Instead of timelines branching from each other, each timeline has always been separate from all the others; some have just been so nearly identical to each other that they’re indistinguishable until the point they finally start to diverge. In this model, you always have one future—all those other possible futures are lived by different near-copies of you. In this model, it seems to make more sense to say something like, “Given my current data, I estimate that there is a 5% chance that I am in one of the world-lines where I will still be alive a century from now.”
I strongly suspect that, although the differences between the two models may seem irrelevant, there are enough edge cases where the choice of model would change a decision that the implications are worth spending some time thinking through.
At first glance, the two seem mathematically equivalent to me, and I think the only conceivable difference between them is one of normalization. (The ‘number of worlds’ in the denominator is different and evolves differently, but the ‘fraction of worlds’ in which any physical statement is true should always be the same between the two.)
If you don’t mind, could you go into a little more detail about the possible ‘normalization differences’ you mentioned?
I feel like the sibling comment gives some idea of that, but I’ll try to explain it more. If you have a collection of worlds, then to get their probabilistic expectations to line up with experiment you need certain conditional fractions to hold: conditioned on having been in world A, I am in world B after time t with probability 0.5 and in world C after time t with probability 0.5. But the number of worlds that look like B is not constrained by the model, and whether the worlds are stored as “A” or as the group (“AB”, “AC”) also seems unconstrained. (The nonexistence of local hidden variables is a separate issue; it only constrains what a “world” can mean.)
And so given the freedom over the number of worlds and how they’re stored, you can come up with a number of different interpretations that look mathematically equivalent to me, which hopefully also means they’re psychologically equivalent.
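A minimal sketch of that equivalence, with made-up world labels and weights (nothing here is part of any physical model, just bookkeeping):

```python
# Toy bookkeeping comparison for a single 50/50 branching event.
# The world labels ("A", "B", "C", "AB", "AC") and the weights are
# illustrative assumptions, not derived from any physical theory.

def fraction(worlds, predicate):
    """Weight fraction of worlds in which a statement holds."""
    total = sum(worlds.values())
    return sum(w for name, w in worlds.items() if predicate(name)) / total

# Storage scheme 1 (branching): one world "A" with weight 1.0 splits
# into "B" and "C", each child inheriting half the parent's weight.
branching = {"A": 1.0}
branching = {"B": 0.5 * branching["A"], "C": 0.5 * branching["A"]}

# Storage scheme 2 (always-separate worlds): two worlds "AB" and "AC"
# with weight 0.5 each, indistinguishable until now, finally diverge.
parallel = {"AB": 0.5, "AC": 0.5}

# The world counts and labels differ between the schemes, but the
# fraction of weight in which "I ended up in a B-like world" is true
# agrees in both.
ended_in_b_1 = fraction(branching, lambda name: name == "B")
ended_in_b_2 = fraction(parallel, lambda name: name == "AB")
print(ended_in_b_1, ended_in_b_2)  # 0.5 0.5
```

The choice of how many worlds to store, and under which labels, changes the denominator but never the fraction, which is all that experiment constrains.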
Well, one way to interpret the first model (Multiple Worlds) is that if you have 17 equally-weighted worlds and one splits, you get 18 equally-weighted worlds. This leads to some weird bias towards worlds where many particles have the chance to do many different things. Anecdotally, it’s also the way I was initially confused about all multiple-world models.
Once you introduce enough mathematical detail to rule out this confusion, you give every world a weight. At this point, there is no longer a difference between “A world with weight 1 splitting into two worlds with weight 0.5” and “Two parallel worlds with weight 0.5 each diverging”.
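A small numerical sketch of this (the ‘17 worlds’ figure is just the example above; all numbers are illustrative):

```python
# Naive equal-counting: start with 17 indistinguishable worlds; one of
# them splits, giving 18 "equal" worlds. The history that split now
# claims 2/18 of the count instead of 1/17 -- a spurious bias toward
# histories with more branching events.
naive_share_of_split_history = 2 / 18
assert naive_share_of_split_history > 1 / 17

# Weighted bookkeeping: splitting conserves weight, so the split
# history still carries exactly its original 1/17 of the measure.
weighted = [1 / 17] * 16 + [1 / 34, 1 / 34]
weighted_share_of_split_history = 1 / 34 + 1 / 34
assert abs(sum(weighted) - 1.0) < 1e-12
assert abs(weighted_share_of_split_history - 1 / 17) < 1e-12

# With weights, "a weight-1 world splitting into two weight-0.5 worlds"
# and "two parallel weight-0.5 worlds diverging" assign the same
# measure to every outcome, so the two mental models coincide.
```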
Right, but as you point out, that picture is confused: the worlds need to be weighted for the model to predict correctly.