To MIRI-style folk: you can’t simulate the universe from the beginning

To those whose formalisms still depend on simulating the universe from the beginning (if yours doesn’t, ignore this post; I’m not trying to correct connectionism here):

It doesn’t matter how smart a being is; even temporarily assuming superdeterminism[1], Laplace’s demon[2] still cannot fit inside the universe it’s attempting to simulate. It would be even more extreme than Maxwell’s demon[3], which I believe is also forbidden even in a classical deterministic universe. There are cryptographic amounts of chaos-induced entropy in the way. Impressively cryptographic amounts, actually.

There are no simplifications that let you get around this; you have to simulate probabilistically, because you can never exactly guess what chaos did in the blurred-out details. Everything has sensitive dependence on initial conditions. The amount you can get around that by being smart is significant, but bumping a butterfly to change the course of history in a specific, intentional way ain’t ever happening without the bump transferring nanites, because the atmosphere is far too chaotic. You need active control to make small energy transfers accumulate into large ones; open-loop control through a system in a chaotic fluid regime, e.g. the atmosphere, won’t do it. You can’t pre-model it and bump it in ways that accumulate into perfection: there’s too much chaos, and the slightest noise in your sim will accumulate into error.
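To make “sensitive dependence” concrete, here’s a toy sketch (my illustration, not part of the original argument; the logistic map, the perturbation size, and the step count are all arbitrary choices): two trajectories of a chaotic map that start 10⁻¹² apart become completely decorrelated within a few dozen steps.

```python
def logistic(x, r=4.0):
    """One step of the logistic map; fully chaotic at r = 4."""
    return r * x * (1.0 - x)

def separation(x0=0.3, eps=1e-12, steps=60):
    """Distance between two trajectories whose starting points differ by eps."""
    a, b = x0, x0 + eps
    for _ in range(steps):
        a, b = logistic(a), logistic(b)
    return abs(a - b)
```

After 60 steps, the initial 10⁻¹² perturbation has been amplified by roughly 2⁶⁰, so the two runs bear no resemblance to each other. The digits the simulator didn’t know about now dominate the answer, and no amount of cleverness recovers them.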

A key point here: it really doesn’t matter how smart you are. You can do all sorts of crazy stuff, don’t get me wrong; intelligence lets you simplify systems where there isn’t chaos. But you can’t simulate 100 copies of the universe in your head using 100 watts, or 100 gigawatts, or 100 exawatts, and expect to get something that exactly matches. It cannot be done, no matter how big your brain is. There’s too much chaos.
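A back-of-envelope way to see why raw power doesn’t help (my illustration; the per-step Lyapunov exponent of ln 2 and the error tolerance are assumed for the sake of the example, not measured figures): the initial-condition precision a predictor needs grows linearly with how far ahead it wants to predict, so the measurement and memory requirements are unbounded in the prediction horizon.

```python
import math

def bits_needed(steps, lyapunov=math.log(2), target_error=1e-3):
    """Bits of initial-condition precision needed to keep prediction error
    below target_error after `steps` iterations of a chaotic map whose
    errors grow by a factor of exp(lyapunov) per step."""
    return math.ceil(math.log2(1.0 / target_error) + steps * lyapunov / math.log(2))
```

With these toy numbers, a hundred steps already demands 110 bits of initial-condition precision, and every further step costs one more bit. Measuring a real physical system to thousands of bits per degree of freedom isn’t available to any finite being, which is the sense in which the entropy in the way is “cryptographic.”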

Though there might be larger-scale structure in things like life, since life is in fact built out of resistance to error accumulation. But it’s still noisy, and the precision to which you can constrain claims about it is still not favorable to formalisms that simulate from the beginning.

If this changes your plans at all

I’d suggest looking into whether you can generalize your formalism so it works on arbitrarily small or large patches of spacetime with boundary conditions, and then focus on how to ensure you find the bubbles of spacetime that the AI and user are in, assuming you’re starting inference from the middle of a patch of spacetime. E.g., Carado’s plan for how to do that seems like a starter, but it underestimates some problems. I’ve talked to her about it and made this claim directly; just thought I’d also mention it in public.

Again, you can do ridiculously well at optimized cause-effect simulation, but if your formalism requires simulating from the beginning in order to identify a specific being, you’re going to have a bad time trying to make it useful to finite superintelligences, which can never simulate the universe from the beginning, even if they are very, very large.

  1. ^

    From wikipedia, “Superdeterminism”:

    In quantum mechanics, superdeterminism is a loophole in Bell’s theorem. By postulating that all systems being measured are correlated with the choices of which measurements to make on them, the assumptions of the theorem are no longer fulfilled. A hidden variables theory which is superdeterministic can thus fulfill Bell’s notion of local causality and still violate the inequalities derived from Bell’s theorem.[1] This makes it possible to construct a local hidden-variable theory that reproduces the predictions of quantum mechanics, for which a few toy models have been proposed.[2][3][4] In addition to being deterministic, superdeterministic models also postulate correlations between the state that is measured and the measurement setting.

  2. ^

    From wikipedia, “Laplace’s demon”:

    In the history of science, Laplace’s demon was a notable published articulation of causal determinism on a scientific basis by Pierre-Simon Laplace in 1814.[1] According to determinism, if someone (the demon) knows the precise location and momentum of every atom in the universe, their past and future values for any given time are entailed; they can be calculated from the laws of classical mechanics.[2]

    Discoveries and theories in the decades following suggest that some elements of Laplace’s original writing are wrong or incompatible with our universe. For example, irreversible processes in thermodynamics suggest that Laplace’s “demon” could not reconstruct past positions and momenta from the current state.

  3. ^

    From wikipedia, “Maxwell’s demon”:

    Maxwell’s demon is a thought experiment that would hypothetically violate the second law of thermodynamics. It was proposed by the physicist James Clerk Maxwell in 1867.[1] In his first letter Maxwell called the demon a “finite being”, while the Daemon name was first used by Lord Kelvin.[2]

    In the thought experiment, a demon controls a small massless door between two chambers of gas. As individual gas molecules (or atoms) approach the door, the demon quickly opens and closes the door to allow only fast-moving molecules to pass through in one direction, and only slow-moving molecules to pass through in the other. Because the kinetic temperature of a gas depends on the velocities of its constituent molecules, the demon’s actions cause one chamber to warm up and the other to cool down. This would decrease the total entropy of the system, without applying any work, thereby violating the second law of thermodynamics.