Every system seems random from the inside

[Cross-posted from Grand, Unified, Crazy. This builds on a lot of stuff I wrote from before I was cross-posting to Less Wrong, but should be mostly intelligible given a general intuition of what a “computational system” is.]

I’ve been working on a post on predictions which has rather gotten away from me in scope. This is the first of a couple of building-block posts which I expect to spin out so I have things to reference when I finally make it to the main point. This post fits neatly into my old (2014!) sequence on systems theory and should be considered a belated addition to that.

Systems can be deterministic or random. A system that is random is, of course… random. I’m glad the difficult half of this essay is out of the way! Kidding aside, the interesting part is that, from the inside, a deterministic system also appears random. Stated this way, the claim is a bit stronger than what I can actually argue, but it guides the intuition better than the more formal version.

Because no proper subsystem can perfectly simulate its parent, every inside-the-system simulation must ultimately exclude information, either via the use of lossy abstractions or by choosing to simulate only a proper, open subsystem of the parent. In either case, the excluded information effectively appears in the simulation as randomness: fundamentally unpredictable additional input.
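To make that concrete, here is a minimal Python sketch. Everything in it (the split of the state into a “visible” and a “hidden” half, the update rule, the constants) is invented purely for illustration: the whole system is perfectly deterministic, yet a simulator that only sees the visible half cannot predict its own next observation, so the hidden half shows up to it as noise.

```python
# Toy illustration: the full system is deterministic, but an observer who
# only sees the "visible" half of the state cannot predict it.

def full_step(visible, hidden):
    """Deterministic update of the whole system (constants are arbitrary)."""
    hidden = (6364136223846793005 * hidden + 1442695040888963407) % 2**64
    visible = (visible + (hidden >> 32)) % 100
    return visible, hidden

def inside_prediction(visible):
    """An inside simulator that only knows the visible state.
    Any rule that ignores `hidden` is effectively guessing."""
    return visible  # naive "no change" prediction

visible, hidden = 0, 42
for _ in range(5):
    predicted = inside_prediction(visible)
    visible, hidden = full_step(visible, hidden)
    print(f"predicted {predicted:2d}, observed {visible:2d}")
```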

This has some interesting implications if reality is a system and we’re inside it, as I believe to be the case. First, it means that we can never conclusively prove whether the universe is deterministic (à la Laplace’s Demon) or random. We can still make some strong probabilistic arguments, but a full proof becomes impossible.

Second, it means that we can safely assume the existence of “atomic randomness” in all of our models. If the system is random, then atomic randomness is in some sense “real” and we’re done. But if the system is deterministic, then we can pretend atomic randomness is real, because the information necessary to dispel that apparent randomness is provably unavailable to us. In some sense the distinction doesn’t even matter anymore; whether the information is provably unavailable or just doesn’t exist, our models look the same.
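As a toy illustration of that last point, here is the same invented system from the sketch above placed next to one driven by a genuinely random source (the OS entropy pool, standing in for “atomic randomness”). From the visible trace alone, an inside observer gets essentially the same statistics either way; nothing available to it distinguishes the two worlds.

```python
import secrets
from collections import Counter

def visible_trace_deterministic(n, seed=42):
    """Visible half of a fully deterministic system (hidden state is an LCG)."""
    hidden, visible, out = seed, 0, []
    for _ in range(n):
        hidden = (6364136223846793005 * hidden + 1442695040888963407) % 2**64
        visible = (visible + (hidden >> 32)) % 100
        out.append(visible)
    return out

def visible_trace_random(n):
    """Visible half of a system driven by an external random source."""
    visible, out = 0, []
    for _ in range(n):
        visible = (visible + secrets.randbits(32)) % 100
        out.append(visible)
    return out

# From the inside, the two traces look statistically the same
# (roughly uniform over each decile of 0..99):
print(Counter(v // 10 for v in visible_trace_deterministic(10_000)))
print(Counter(v // 10 for v in visible_trace_random(10_000)))
```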