Excellent! Great to have a cleanly formulated article to point people to!
Good point! My intuition was that the Bekenstein bound (https://en.wikipedia.org/wiki/Bekenstein_bound) limits the amount of information in a volume. (Or more precisely the information surrounded by an area.) Therefore the number of states in a finite volume is also finite.
I must add: since writing this comment, a man called George pointed out to me that, when modeling the universe as a computation, one must take care not to accidentally derive ontological claims from it.
So today I would have a more ‘whatever-works-works’ attitude: UTMs and DFAs are both just models, neither likely to be ontologically true.
Wow, thank you for the kind and thorough reply! Obviously there is much more to this, I’ll have a look at the report
Mirror Organisms Are Not Immune to Predation
I first heard this idea from Joscha Bach, and it is my favorite explanation of free will. I have not heard it called a ‘predictive-generative gap’ before though, which is very well formulated imo.
Simplicity Priors Are Tautological
Any non-uniform prior inherently encodes a bias toward simplicity. This isn’t an additional assumption we need to make—it falls directly out of the mathematics.
For any hypothesis $h$, the information content is $I(h) = -\log_2 P(h)$, which means probability and complexity have an exponential relationship:

$$P(h) = 2^{-I(h)}$$
This demonstrates that simpler hypotheses (those with lower information content) are automatically assigned higher probabilities. The exponential relationship creates a strong bias toward simplicity without requiring any special mechanisms.
The “simplicity prior” is essentially tautological—more probable things are simple by definition.
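A tiny numerical check of the relationship (the prior here is an arbitrary random one, purely for illustration):

```python
import numpy as np

# For ANY prior P over a finite set of hypotheses, information content is
# I(h) = -log2 P(h), so P(h) = 2^(-I(h)): ranking by probability is exactly
# ranking by (negative) information content. The prior below is arbitrary.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(5))        # an arbitrary non-uniform prior
I = -np.log2(P)                      # information content in bits

assert np.allclose(P, 2.0 ** (-I))              # the exponential relationship
assert (np.argsort(-P) == np.argsort(I)).all()  # same ordering, by construction
for p, i in sorted(zip(P, I), reverse=True):
    print(f"P(h) = {p:.3f}   I(h) = {i:.2f} bits")
```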
I would be interested in seeing those talks; could you maybe share links to the recordings?
Very good work, thank you for sharing!
Intuitively speaking, the connection between physics and computability arises because the coarse-grained dynamics of our Universe are believed to have computational capabilities equivalent to a universal Turing machine [19–22].
I can see how this is a reasonable and useful assumption, but the universe seems to be finite in both space and time and therefore not a UTM. What convinced you otherwise?
Thank you! I’ll have a look!
Simplified: the Solomonoff prior is the distribution you get when you take a uniform distribution over all strings and feed them to a universal Turing machine.
Since the outputs are also strings: what happens if we iterate this? What is the stationary distribution? Is there even one? The fixed points will be quines, programs that output their own source code. But how are they weighted? By their length? Presumably you can also have quine cycles, programs that generate each other in turn, in a manner reminiscent of metagenesis. Do these quine cycles capture all the probability mass, or does some diverge?
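To make the question concrete, here is a toy finite sketch of the iteration I mean (the ‘machine’ below is an arbitrary made-up map on 3-bit strings, not a real universal Turing machine, so it only illustrates the shape of the question, not the actual Solomonoff setting):

```python
from collections import defaultdict
from itertools import product

# Toy finite stand-in: start from the uniform distribution over inputs and
# repeatedly push it through the machine. Fixed points play the role of
# quines, cycles play the role of quine cycles.
strings = ["".join(bits) for bits in product("01", repeat=3)]
machine = {s: (s[1:] + s[0]) if s.count("1") % 2 else "000" for s in strings}

dist = {s: 1 / len(strings) for s in strings}   # uniform start
for _ in range(50):                             # iterate the pushforward
    new = defaultdict(float)
    for s, p in dist.items():
        new[machine[s]] += p
    dist = dict(new)

# All probability mass ends up on the machine's fixed points and cycles.
print({s: round(p, 3) for s, p in dist.items() if p > 1e-9})
```

In this finite toy all the mass necessarily ends up on cycles; the open question is what the weighting looks like for an actual UTM, where programs can also diverge.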
Very grateful for answers and literature suggestions.
“Many parts of the real world we care about just turn out to be the efficiently predictable.”
I had a discussion about exactly these ‘pockets of computational reducibility’ today: whether they are the same as the more vague ‘natural abstractions’, and whether there is some observation selection effect going on here.
Very nice! Alexander and I were thinking about this after our talk as well. We thought of this in terms of the Kolmogorov structure function, and I struggled with what you call Claim 3, since the time requirements are only bounded by the busy beaver function. I think if you accept some small divergence it could work; I would be very interested to see it.
Small addendum: the padding argument gives a lower bound on the multiplicity. From above, it is bounded via the Kraft–McMillan inequality.
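To spell out the bound I mean (a rough sketch, as I understand it, assuming a prefix-free universal machine $U$ and writing $N_n(x)$ for the number of length-$n$ programs that output $x$):

$$2^{\,n - K(x) - O(1)} \;\le\; N_n(x) \;\le\; 2^{\,n - K(x) + O(1)},$$

where the lower bound comes from padding a shortest program out to length $n$, and the upper bound from the fact that the length-$n$ programs for $x$ contribute $N_n(x)\,2^{-n}$ to the Kraft sum $\sum_{U(p)=x} 2^{-|p|}$, which the coding theorem puts at $2^{-K(x)+O(1)}$.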
Interesting! I think the problem is that dense/compressed information can be represented in ways in which it is not easily retrievable for a certain decoder. The Standard Model written in Chinese is a very compressed representation of human knowledge of the universe and completely inscrutable to me.
Or take some maximally compressed code and pass it through a permutation. The information content is obviously the same but it is illegible until you reverse the permutation.
In some ways it is uniquely easy to do this to codes with maximal entropy, because by definition it will be impossible to detect a pattern and recover a readable explanation.
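As a concrete toy illustration of the permutation point (the text and seed below are arbitrary):

```python
import zlib
import numpy as np

# Compress a string, then shuffle the compressed bytes with a fixed permutation.
# The information content is unchanged, but the decoder can no longer read it
# until the permutation is undone.
text = b"the standard model, compressed and then scrambled " * 20
compressed = zlib.compress(text)

rng = np.random.default_rng(42)
perm = rng.permutation(len(compressed))
scrambled = bytes(np.frombuffer(compressed, dtype=np.uint8)[perm])

try:
    zlib.decompress(scrambled)              # almost certainly fails
except zlib.error as err:
    print("decoder fails on the scrambled code:", err)

restored = bytes(np.frombuffer(scrambled, dtype=np.uint8)[np.argsort(perm)])
assert zlib.decompress(restored) == text    # undoing the permutation recovers everything
```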
In some ways the compressibility of NNs is a proof that a simple model exists, without revealing an understandable explanation.
I think we can have an (almost) minimal yet readable model without the exponentially decreasing information density required by LDCs.
Good points! I think we underestimate the role that brute force plays in our brains though.
Damn! Dark forest vibes, very cool stuff!
Reference for the sub collision: https://en.wikipedia.org/wiki/HMS_Vanguard_and_Le_Triomphant_submarine_collision
And here’s another one! https://en.wikipedia.org/wiki/Submarine_incident_off_Kildin_Island
Might as well start equipping them with fenders at this point.
And 2050 basically means post-AGI at this point. ;)
Great write-up, Alex!
I wonder how well the transparent battlefield translates to the naval setting.
1. Detection and communication through water are significantly harder than through air, requiring shorter distances.
2. Surveilling a volume scales worse than surveilling a surface.
Am I missing something, or do you think drones will just scale anyway?
I don’t know if that is a meaningful question.
Consider this: a cube is something that is symmetric under the octahedral group—that’s what *makes* it a cube. If it wasn’t symmetric under these transformations, it wouldn’t be a cube. So also with spacetime—it’s something that transforms according to the Poincaré group (plus some other mathematical properties, metric etc.). That’s what makes it spacetime.
I’ll bet you! ;)
Sadly my claim is somewhat unfalsifiable, because the emergence might always be hiding at some smaller scale, but I would be surprised if we find the theory that the Standard Model emerges from and it contains classical spacetime.
I did a little search, and if it’s worth anything Witten and Wheeler agree: https://www.quantamagazine.org/edward-witten-ponders-the-nature-of-reality-20171128/ (just search for ‘emergent’ in the article)
The Red Queen’s Race in Weight Space
In evolution we can tell a story that genes are selected not only for their function but also for how easily modifiable they are. For example, having a generic antibiotic gene is much more useful than having one locked into a single target and far, in edit-distance terms, from any other useful variant.
Why would we expect the generic gene to be more common? There is selection pressure on having modifiable genes because environments are constantly shifting (the Red Queen hypothesis). Genes are modules with evolvability baked in by past selection.
Can we make a similar argument for circuits/features/modes in NNs? Obviously it is better to have a more general circuit, but can we also argue that “multitool circuits” are not only better at generalising but also more likely to be found?
SGD does not optimise loss but rather something like free energy, taking degeneracy (multiplicity) into account with some effective temperature.
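Roughly what I mean by free energy, as I understand the singular learning theory statement (with empirical loss $L_n$, inverse effective temperature $\beta$, prior $\varphi$ over weights, and local learning coefficient $\lambda$ measuring degeneracy):

$$F_n \;=\; -\log \int e^{-n\beta L_n(w)}\,\varphi(w)\,dw \;\approx\; n\beta L_n(w^\ast) \;+\; \lambda \log n,$$

so two solutions with identical loss can still differ in free energy purely through $\lambda$, i.e. through how degenerate they are.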
But evolvability seems distinct from degeneracy. Degeneracy is a property of a single loss landscape, while evolvability is a claim about distribution shift. And the claim is not “I have low loss in the new distribution” but rather “I am very close to a low-loss solution of the new distribution.”
Degeneracy in ML ≈ mutational robustness in biology, which is straightforward, but that is not what I am pointing at here. Evolvability is closer to out-of-distribution adaptivity: the ability to move quickly into a new optimum with small changes.
Are there experiments where a model is trained on a shifting distribution?
Is the shifting distribution relevant, or can this just as well be modeled as a mixture of the distributions, so that what we think of as OOD is actually in the mixture distribution? In that case degeneracy is all you need.
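For concreteness, here is a minimal sketch of the kind of experiment I mean (linear regression with SGD, all details made up; I am not claiming a particular outcome, just the shape of the measurement):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
w_a, w_b, w_c = (rng.normal(size=d) for _ in range(3))  # two training environments + a fresh shift

def sgd_step(w, target, lr=0.05):
    """One SGD step on squared error for a random input, with labels from target."""
    x = rng.normal(size=d)
    return w - lr * (x @ (w - target)) * x

def steps_to_adapt(w, target, tol=0.05, max_steps=20_000):
    """SGD steps until the weights are within tol (mean squared distance) of target."""
    for step in range(max_steps):
        if np.mean((w - target) ** 2) < tol:
            return step
        w = sgd_step(w, target)
    return max_steps

# Regime 1: the distribution shifts between the two environments in blocks (Red Queen style).
w_shift = np.zeros(d)
for epoch in range(40):
    target = w_a if epoch % 2 == 0 else w_b
    for _ in range(100):
        w_shift = sgd_step(w_shift, target)

# Regime 2: a static 50/50 mixture of the same two environments.
w_mix = np.zeros(d)
for _ in range(4000):
    w_mix = sgd_step(w_mix, w_a if rng.random() < 0.5 else w_b)

# "Evolvability" would show up as fewer steps needed to adapt to the fresh shift w_c.
print("shifting regime:", steps_to_adapt(w_shift.copy(), w_c))
print("mixture regime: ", steps_to_adapt(w_mix.copy(), w_c))
```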
Related ideas: cryptographic one-way functions (examples of unevolvable designs), out-of-distribution generalisation, mode connectivity.