(Disclaimer: writing off the top of my head, haven’t thought too deeply about it, may be erroneous.)
Consider probability theory. In its context, we sometimes talk about different kinds of worlds we could be in, differing by the outcome of some random process. Does that mean we think those other parallel worlds are literal? Not necessarily. In some cases, that’s literally impossible, e.g., when talking about worlds differing by something we’re logically uncertain about. I don’t know whether I live in a world in which the 10^100th digit of pi is odd or even, but that doesn’t mean both worlds can exist.
Yet, talking about “different worlds” is still a useful framework/bit of verbiage.
Next: For whatever reason, the universe favors simplicity. Simple explanations, simple hypotheses, Occam’s razor. Or perhaps “simplicity” is just a term for “how much the universe prefers this thing”. In any case, if you want to make predictions about what happens, using some kind of simplicity prior is useful.
Anthropic reasoning is, in large part, just a way of reasoning about that simplicity prior. When I say that we’re more likely to be coarse agents because they “draw on reality fluid from an entire pool of low-level agents”, I do not necessarily mean that there are literal alternate realities populated by agents slightly different from each other at the low level, and that we-the-coarse-agents are somehow implemented on all of them simultaneously – much like there aren’t literally several worlds differing by the 10^100th digit of pi.
Rather, what I mean is that, when we’re calculating “how much the universe will like this hypothesis about reality”, we want to offset the raw complexity of the explanation by taking into account how many observationally equivalent explanations there are, if we want to compute the correct result. The universe likes simplicity, for whatever reason; being alt-simple is one way to be simple; therefore, hypotheses about reality in which we discover that we are “coarse” agents are favored by the generalized Occam’s razor. To do otherwise – to argue that we exist in some manner in which our low-level implementation is uniquely defined – goes against the razor: it postulates unnecessary details/entities.
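As a toy illustration of this offsetting (my own sketch, with invented complexity numbers, not anything from the original argument), consider a Solomonoff-style prior that weights a K-bit hypothesis by 2^-K. A coarse hypothesis that is observationally equivalent to a whole pool of low-level variants effectively inherits the summed weight of the pool:

```python
# Toy model with hypothetical numbers: a simplicity prior in which each
# K-bit hypothesis gets weight 2^-K.

def prior(complexity_bits: int) -> float:
    """Solomonoff-style weight for a hypothesis of the given description length."""
    return 2.0 ** -complexity_bits

# One "fine" hypothesis that uniquely pins down the agent's low-level
# implementation, at (an invented) 50 bits:
fine_weight = prior(50)

# A pool of 1024 low-level variants, each also 50 bits, all observationally
# equivalent from the coarse agent's point of view. The coarse hypothesis
# "draws on" the whole pool's weight:
coarse_weight = sum(prior(50) for _ in range(1024))

print(coarse_weight / fine_weight)  # -> 1024.0
```

The numbers are made up; the only point is that summing over observationally equivalent low-level implementations is what “drawing on reality fluid from a pool” cashes out to under a simplicity prior.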
But all of this is a mouthful, so it’s easier to stick to talking about different worlds. They may or may not be literally real; that seems like a reasonable bit of metaphysics to me, but I don’t really know. It’s above our current pay grade anyway; I shut up and calculate.
What observation is better explained or predicted if one assumes Tegmark universes?
Tegmark IV is a way to think about the space of all mathematically possible universes over which we define the probability distribution regarding the structure of reality we are in. In large part, it’s a framework/metaphysical interpretation, and doesn’t promise to make predictions different from any other valid framework for talking about the space of all possible realities.
Quantum immortality/anthropic immortality is a separate assumption; Tegmark IV doesn’t assume it (as you point out, it’s coherent to imagine that you still only exist in a specific continuity in it), and it doesn’t require Tegmark IV (e.g., in a solipsism-like view, you can also assume that you will never stop receiving further observations, and thereby restrict the set of hypotheses about the universe to those where you will never die – all without ever assuming there’s more than one true universe).
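That restriction-of-hypotheses move can be sketched as plain Bayesian conditioning (my own toy construction, with invented hypotheses and priors): keep only the hypotheses in which you never stop receiving observations, then renormalize, all without ever positing a second universe.

```python
# Toy sketch: "anthropic immortality" as conditioning on the statement
# "I will never stop receiving further observations".
# Hypothetical hypotheses: name -> (prior, observations_never_end)
hypotheses = {
    "die_at_80":            (0.6, False),
    "persist_indefinitely": (0.1, True),
    "substrate_transfer":   (0.3, True),
}

# Restrict to hypotheses where the observer never dies, then renormalize:
surviving = {name: p for name, (p, alive) in hypotheses.items() if alive}
z = sum(surviving.values())
posterior = {name: p / z for name, p in surviving.items()}

print(posterior)  # die_at_80 is ruled out; the rest renormalize
```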
What if my utility function is keeping alive and flourishing this one specific instance of myself, connected in an unbroken sequential chain to every previous instance?
Those words may not actually mean anything. Like, if you compute 5 + 7 = 12, there’s no meaningful sense in which you can pick out the “original” 5 out of that 12. You could if the elements being added up had some additional structure, if the equation were an abstraction over a more complex reality. But what if there is no such additional structure, if there’s just no machinery for picking out “this specific instance of yourself”?
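A minimal illustration of the 5 + 7 point (my own toy code): a bare integer sum retains no record of its parts, whereas a value with extra structure does.

```python
# Plain integers carry no provenance: once summed, there is no fact of the
# matter about which "part" of 12 was the original 5.
total = 5 + 7  # just the number 12; nothing inside it marks the 5

# With additional structure, the question becomes answerable: here the
# parts are tagged before being combined.
tagged_parts = [("original", 5), ("added", 7)]
structured_total = sum(value for _tag, value in tagged_parts)
originals = [value for tag, value in tagged_parts if tag == "original"]

print(total, structured_total, originals)  # -> 12 12 [5]
```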
Like, perhaps we are in a cleverly structured multi-level simulation that runs all Everett branches, and to save processing power, it functions as follows: first it computes the highest-level history of the world, then all second-highest-level histories consistent with that highest-level history, then all third-highest-level histories, et cetera; and suppose the bulk of our experiences is computed at some non-lowest level. In that case, you quite literally “exist” in all low-level histories consistent with your coarse high-level experiences; the bulk of those experiences was calculated first, then lower-level histories were generated “around” them, with some details added in. “Which specific low-level history do I exist in?” is then meaningless to ask.
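The coarse-first scheme above can be sketched as a toy model (my own construction, with invented “histories”; it only illustrates the order of computation, not any claim about physics): the high-level history is computed once, and every low-level history consistent with it is then generated around it.

```python
from itertools import product

# One high-level history, computed first.
coarse_history = ["wake", "work", "sleep"]

# Hypothetical micro-details consistent with each coarse step, filled in
# afterwards, "around" the already-fixed coarse experiences.
refinements = {
    "wake":  ["7:00", "7:05"],
    "work":  ["desk", "cafe"],
    "sleep": ["early", "late"],
}

# All low-level histories consistent with the coarse one:
low_level_histories = list(product(*(refinements[step] for step in coarse_history)))

print(len(low_level_histories))  # -> 8; the coarse agent "exists" in all 8
```

Under this scheme, asking which of the 8 low-level histories the agent is “really” in has no answer: the coarse experiences were fixed before any of them existed.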
Or maybe not; maybe the simulation hub we are in runs all the low-level histories separately, and you actually are in one of them. How can we tell?
Probably neither of those, probably we don’t yet have any idea what’s really going on. Shrug.