Suppose Everett is right: no collapse, just branching under decoherence. Here’s a thought experiment.
At time t₀, Box A contains a rock and Box B contains a human. We open both boxes and let their contents interact freely with the environment—photons scatter, air molecules collide, and so on. By time t₁, decoherence has done its work.
Rock in Box A.
A rock is a highly stable, decohered object. Its pointer states (position, bulk properties) are very robust. When photons, air molecules, etc. interact with it, the redundant environmental record overwhelmingly favors a consistent description: “the rock is here, in this shape.” Across branches, the rock will look extremely similar at the macroscopic level. Microscopically (atom by atom), there will be tiny differences (different thermal phonons, rare scattering events), but they won’t affect the higher-level description.
Human in Box B.
A human is a complex, dynamically unstable system, with huge numbers of degrees of freedom coupled in chaotic ways (neural firing patterns, biochemistry, small fluctuations magnifying over time). Decoherence still stabilizes macroscopic pointer states (the human is “there”), but internally the branching proliferates much faster. At a superficial level (you open the box and see a person), the worlds look similar. At a fundamental/microscopic level, the worlds rapidly diverge — especially in brain states. A single ion channel opening or not can, milliseconds later, cascade into different neural firing patterns, ultimately leading to different subjective experiences.
Similarity Across Worlds.
Superficially both are consistent across branches. Fundamentally, the rock’s worlds remain tightly bunched; the human’s fan out chaotically. Hence, the rock is “thicker” across worlds than the human. Its high-level processes are less contingent on perturbations in the environment.
In Zurek’s Quantum Darwinism, similarity across worlds is captured by redundancy R_δ: the number of disjoint environmental fragments that each carry nearly complete information (within tolerance δ) about a system’s pointer state. High R_δ (like for a rock’s position) means strong agreement across branches; low R_δ (like for a human’s microstates, which unspool into contingent perceptions, decisions, etc.) means rapid divergence.
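For reference, Zurek’s formal definition (roughly, as I recall it): the redundancy is the inverse of the smallest fraction of the environment a single fragment must contain to supply nearly all of the classical information about the system,

$$R_\delta = \frac{1}{f_\delta}, \qquad f_\delta = \min\left\{ f : I(\mathcal{S}:\mathcal{F}_f) \ge (1-\delta)\, H_{\mathcal{S}} \right\},$$

where F_f is a fragment containing a fraction f of the environment, I is the mutual information between system and fragment, and H_S is the system’s entropy.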
If redundancy measures how robustly some property of a system is copied into the environment, then you can treat it as a measure of “multiversal thickness.” Rocks are thick, humans thin.
For agents, then, to be consistent across worlds is to maximize the redundancy of certain states or policies.
At the microscopic (neuronal firing) level, chaos ensures our fine-grained branching diverges rapidly.
But at the coarse-grained, behavioral/policy level, adopting stable rules (e.g. “I will act according to principle X regardless of circumstances”) could force convergence at the level of behavior and function like an einselected pointer state: robust, redundantly arrived at, consistent across branches.
In other words: a policy followed under many micro-histories becomes thick in the Everettian sense, because the environment (and other observers) can redundantly infer it from multiple branches.
For artificial or future intelligences, this could be taken further. If an agent values a goal or principle (say, honesty, or maximizing knowledge), it could design itself so that this property is redundantly manifest across its many branching instantiations. This is analogous to engineering for einselection: choosing internal dynamics that make certain states/policies pointer-like, hence stable and redundantly observable across worlds. Philosophically, that makes such values or policies “thick across the multiverse” — they survive and propagate in more branches, becoming almost like invariants.
Possible Implications
In terms of personal identity, maybe what we care about is what persists across worlds, which is not the individual microstate, but the thick, redundant policies or principles you enact. Your “self” is most real where it is most redundantly recorded. Agents might have a sense of personal identity that encompasses their expression in multiple worlds.
If thickness correlates with lastingness/prominence across the multiverse, there seems to be a normative pull toward cultivating redundancy in values you endorse.
In AI design/safety, future intelligences might explicitly select for high redundancy of aligned values, ensuring they are robust pointer-states rather than fragile micro-fluctuations.
There’s tension here. Adopting universalized policies across environments increases multiversal thickness, but it also sacrifices one of agency’s strengths, i.e. the ability to adapt and switch strategies. An agent that rigidly echoes the same policy everywhere risks brittleness.
Perhaps the sweet spot is to preserve thickness only at the level of an idealized decision theory. This way, flexibility is maintained within branches and you expect to be robust insofar as your decision theory is good, but consistency/predictability holds across them.
On the other hand, pre-commitment is powerful. In some circumstances, an agent that knows it will act consistently across worlds can extract coordination benefits (with other agents, with itself in other branches, or even with its future selves). There may be precommitments that are decision-theoretically suboptimal yet nonetheless advantageous. In that sense, selective multiversal thickness could be a way to leverage redundancy for advantage.
Hi Linch, a really nice comment.
I’ve reached similar conclusions on the back of some work by Toby and Anders. I think model 3 is most likely though there could be some surprises that make it more like models 4-6.
It could be that even at tech maturity a maximum single-hop distance is limiting, such that one has to, e.g., ‘crawl’ along galactic filaments, taking pit stops to replicate before continuing the journey to the next intermediate destination. Dust is a good candidate for the limiting factor. More likely, the probability of mission success decreases with, e.g., the integral of the dust flux over the mission trajectory, so one can reduce the number of pit stops required via the redundancy of sending more probes (see the toy sketch below). And the scaling there may be very good or very poor. The material cost of probe creation could be a consideration: in the ‘spam all reachable targets’ strategy one might need anywhere from millions to billions of probes depending on the reachable radius. This could require the resources of multiple solar systems, though it will depend on the design of advanced probes (which could be extraordinarily light, but that’s not totally clear imo); one could try various assumptions and do some napkin math.
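To make the redundancy point concrete, here’s a toy sketch. The exponential-attrition model, the constant kappa, and all numbers are illustrative assumptions of mine, not anything load-bearing: if each probe independently survives a leg with probability p = exp(−κ · ∫flux), then the number of probes needed for at least one survivor grows only logarithmically in the desired confidence, which is why redundancy can substitute for extra pit stops.

```python
import math

def leg_survival_prob(dust_fluence, kappa=1.0):
    # Per-probe survival over one leg, modeled (illustratively) as
    # exp(-kappa * integrated dust fluence along the trajectory).
    return math.exp(-kappa * dust_fluence)

def probes_needed(dust_fluence, confidence=0.999, kappa=1.0):
    # Smallest N such that P(at least one of N independent probes survives) >= confidence:
    # 1 - (1 - p)^N >= confidence  =>  N >= log(1 - confidence) / log(1 - p)
    p = leg_survival_prob(dust_fluence, kappa)
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

# Toy numbers: heavier dust fluence per leg means more probes, but only ~logarithmically more.
for fluence in (0.5, 2.0, 5.0):
    print(f"fluence={fluence}: ~{probes_needed(fluence)} probes for 99.9% leg success")
```

Whether the true attrition law is anywhere near exponential is exactly the “very good or very poor” scaling question.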
Another intuition-boosting model, aside from 3, to my mind is that the steepest scaling gradient for intelligence (probably of all time) runs from essentially today to the near future, e.g. within the next thousand years, though I’d expect to get most of the way there within 1–100 years.
For example, I think a Landauer-limited KII Dyson is about +22 OOMs in ops/s vs Earth’s (chips + humans) today, and you’re getting ~13 of those OOMs from Kardashev scaling. The expectation is that you saturate the tech ceiling as it pertains to probes quickly enough to overtake early launches even to nearby stars.
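Rough napkin math behind those figures (my own back-of-envelope constants; the Earth-today ops/s estimate in particular is a loose assumption dominated by brains):

```python
import math

k_B = 1.380649e-23                 # Boltzmann constant, J/K
T = 300.0                          # assumed operating temperature, K
landauer = k_B * T * math.log(2)   # ~2.9e-21 J per irreversible bit operation

L_sun = 3.8e26                     # solar luminosity, W
dyson_ops = L_sun / landauer       # Landauer-limited KII Dyson: ~1.3e47 ops/s

earth_ops_today = 1e25             # assumption: ~1e10 brains * ~1e15 ops/s dwarfs chips
civ_power_today = 2e13             # W, rough current global power use

total_ooms = math.log10(dyson_ops / earth_ops_today)    # ~22
kardashev_ooms = math.log10(L_sun / civ_power_today)    # ~13 from energy scale-up alone
efficiency_ooms = total_ooms - kardashev_ooms           # ~9 from approaching the Landauer limit

print(f"KII Dyson at Landauer limit: ~{dyson_ops:.1e} ops/s")
print(f"Gain vs Earth today: ~{total_ooms:.0f} OOMs "
      f"(~{kardashev_ooms:.0f} Kardashev + ~{efficiency_ooms:.0f} efficiency)")
```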
The densification of intelligence is perhaps underappreciated: here I just throw out an ops/s figure, but there are also algorithmic efficiency gains and generally many OOMs to be had in answering ‘what is the max intelligence I can get per unit energy?’ In terms of energy scale-up after the Dyson stage, you’ve then got another ~+10 OOMs in the galaxy, which will take on the order of 100,000 years to reach, then about +10 more for the rest of the reachable universe, but that will take many billions of years. This is why I think you get a steep ascension to tech maturity and then a prolonged expansion thereafter.
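And the later stages, with similarly rough numbers (the galactic luminosity and the count of reachable galaxies are both loose assumptions of mine):

```python
import math

L_sun = 3.8e26                     # W
L_milky_way = 2.5e10 * L_sun       # assumption: Milky Way ~2.5e10 solar luminosities
reachable_galaxies = 4e9           # assumption: order-of-magnitude count of reachable galaxies
L_reachable = reachable_galaxies * L_milky_way

print(f"Sun -> galaxy:     ~+{math.log10(L_milky_way / L_sun):.0f} OOMs in power")
print(f"Galaxy -> reachable universe: ~+{math.log10(L_reachable / L_milky_way):.0f} OOMs in power")
```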
Hopefully this makes sense. Lmk if you’d like to chat more about this. I have more thoughts but I’m so sleepy.