Maybe if lots of noise is constantly being injected into the universe, this would change things. The noise would then count as part of the initial conditions, so the K-complexity of the universe-history is large, but high-level structure is common anyway because it’s more robust to that noise?
To summarize what the paper argues (from my post in that thread):
Suppose the microstate of a system is defined by a set of infinite-precision real numbers, corresponding to, e.g., its coordinates in phase space.
We define the coarse-graining as a truncation of those real numbers: i.e., we fix some degree of precision.
That degree of precision could be, for example, the Planck length.
At the microstate level, the laws of physics may be deterministic and reversible.
At the macrostate level, the laws of physics are stochastic and irreversible. We define them as a Markov process, with transition probabilities P(x,y) defined as “the fraction of the microstates in the macrostate x that map to the macrostate y in the next moment”.
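To make that concrete, here's a toy version (my construction, not the paper's): the doubling map x → 2x mod 1 as the deterministic microdynamics, with [0, 1) split into four equal bins as the macrostates, and P(x,y) estimated by sampling microstates inside each bin.

```python
import numpy as np

# Toy setup (my assumption, not the paper's): doubling map as the deterministic
# microdynamics, four equal bins on [0, 1) as the macrostates.
N_BINS = 4
SAMPLES_PER_BIN = 100_000

def micro_step(x):
    """Deterministic, chaotic microdynamics: x -> 2x mod 1."""
    return (2.0 * x) % 1.0

def macro(x):
    """Coarse-graining: which of the N_BINS equal bins x falls into."""
    return np.minimum((x * N_BINS).astype(int), N_BINS - 1)

# P[i, j] ~ fraction of the microstates in macrostate i that land in macrostate j
# after one micro step: the transition probabilities of the induced Markov process.
rng = np.random.default_rng(0)
P = np.zeros((N_BINS, N_BINS))
for i in range(N_BINS):
    xs = rng.uniform(i / N_BINS, (i + 1) / N_BINS, size=SAMPLES_PER_BIN)
    js = macro(micro_step(xs))
    for j in range(N_BINS):
        P[i, j] = np.mean(js == j)

print(np.round(P, 3))
```

Each row comes out as two entries of ~0.5: the micro law is deterministic, but the induced macro-level process is genuinely stochastic.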
Over time, our ability to predict what state the system is in from our knowledge of its initial coarse-grained state + the laws of physics degrades.
Macroscopically, it’s because of the properties of the specific stochastic dynamic we have to use (this is what most of the paper is proving, I think).
Microscopically, it’s because ever-more-distant decimal digits in the definition of the initial state start influencing the dynamics ever more strongly. (See the multibaker map in Appendix A, the idea of “microscopic mixing” in a footnote, and also, apparently, Kolmogorov-Sinai entropy.)
That is: in order to better pinpoint farther-in-time states, we would have to spend more bits (either by defining more fine-grained macrostates, or maybe by locating them in the execution trace).
Thus: stochasticity, and the second law, are downstream of the fact that we cannot define the initial state with infinite precision.
I.e., it is effectively the case that there’s (pseudo)randomness injected into the state-transition process.
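A toy illustration of that micro-level story (again my own: the doubling map standing in for the microdynamics, binary truncation standing in for our finite knowledge of the initial state). Each step shifts the binary expansion of the state by one digit, so knowing k digits of the initial condition buys roughly a k-step horizon over which the coarse-grained trajectory can be pinpointed:

```python
import numpy as np

# Toy illustration (mine, not the paper's): doubling map as the microdynamics,
# left/right half of [0, 1) as the two macrostates.  Each micro step shifts the
# binary expansion of the state left by one digit, so the k-th binary digit of
# the initial condition decides the macrostate roughly k steps later.
def doubling(x):
    return (2.0 * x) % 1.0

def macro(x):
    return 0 if x < 0.5 else 1

rng = np.random.default_rng(1)
x_true = rng.random()  # the "true" microstate (53 bits is as close to
                       # infinite precision as a float gets)

for k in (5, 10, 20, 30):  # how many binary digits of x_true we know
    a = x_true
    b = np.floor(x_true * 2**k) / 2**k  # truncate to k known digits
    steps = 0
    while macro(a) == macro(b) and steps < 60:
        a, b = doubling(a), doubling(b)
        steps += 1
    print(f"{k:2d} known digits -> macro trajectories agree for ~{steps} steps")
```

Past that horizon, the digits we never specified take over, which is the effective injected randomness described above.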
And if a given state has some other regularities by which it could be compactly defined, aside from defining it through the initial conditions, that would indeed decrease its description length/algorithmic entropy. So we again recover the “trajectories that abstract well throughout their entire history are simpler” claim.
Okay. I think this anthropic theory makes a falsifiable prediction (in principle). The infinite-precision real numbers could be algorithmically simple, or they could be unstructured. The theory predicts that they are not algorithmically simple. If they were algorithmically simple, we could run a Solomonoff inductor on the macrostates and it would recover the full microstates (and that would probably be a simpler description than the abstraction-based compression).
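Here's a very crude toy version of that test (mine; an actual Solomonoff inductor is uncomputable, so I'm using zlib compression as a stand-in for description length). With the doubling map and a two-bin coarse-graining, the macrostate history is just the binary expansion of the initial microstate read off one digit per step, so algorithmic simplicity of the underlying reals would show up directly as a compressible macro history:

```python
import random
import zlib

# Crude proxy (mine; a real Solomonoff inductor is uncomputable): compare how
# compressible the macrostate history is for (a) an algorithmically simple
# initial microstate and (b) an unstructured one.  With the doubling map and a
# two-bin coarse-graining, the macro history at step t is just the (t+1)-th
# binary digit of the initial state, so we can work with the digit strings directly.
STEPS = 80_000

def pack(bits):
    """Pack 0/1 digits into bytes, so truly random digits are incompressible."""
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

# (a) algorithmically simple microstate: 1/3, whose binary expansion is 010101...
simple_history = [i % 2 for i in range(STEPS)]

# (b) unstructured microstate: independent fair-coin digits
rng = random.Random(0)
random_history = [rng.getrandbits(1) for _ in range(STEPS)]

for name, history in [("simple microstate", simple_history),
                      ("unstructured microstate", random_history)]:
    compressed = len(zlib.compress(pack(history), 9))
    print(f"{name}: {compressed} bytes compressed (raw: {len(history) // 8})")
# The simple microstate's macro history compresses to almost nothing;
# the unstructured one doesn't compress at all.
```

The prediction, then, is that the actual universe looks like the second case: no extra compressible structure leaking up from the microstates beyond what the abstractions already capture.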
Some new data on that point: