3. How does that handle ontology shifts? Suppose this symbolic-to-us language is suboptimal for compactly representing the universe. The compression process would then want to use some other, more “natural” language: it would spend some bits of complexity defining that language, then write the world-model in it. The resulting language may turn out to be as alien to us as the encodings NNs use.
The cheapest way to define that natural language, however, would be via whatever definition is simplest in terms of the symbolic-to-us language used by our complexity-estimator. This rules out definitions that would look to us like opaque black boxes, such as neural networks.
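To make the bookkeeping concrete, here is a minimal sketch of that two-part description-length argument. All of the names and bit counts below are invented purely for illustration: the compressor pays once (in the base, symbolic-to-us language) to define the new language, then writes the world-model in it, and it only takes that route when the total comes out shorter.

```python
# Toy sketch of the two-part description-length argument above.
# All bit counts are invented purely for illustration.

def total_description_length(language_definition_bits: int,
                             model_in_language_bits: int) -> int:
    """Two-part code: cost of defining the new language (written in the
    base, symbolic-to-us language) plus the cost of the world-model
    expressed in that new language."""
    return language_definition_bits + model_in_language_bits

# Option A: write the world-model directly in the base language.
direct = total_description_length(0, 10_000)

# Option B: spend some bits defining a more "natural" language, then
# write a shorter world-model in it.
via_new_language = total_description_length(500, 6_000)

# The compressor prefers whichever total is shorter. Crucially, the
# definition of the new language is itself written in the base language,
# which is what rules out opaque-black-box definitions.
assert via_new_language < direct
```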
I note that this requires a fairly strong hypothesis: the symbolic-to-us language apparently has to be interpretable no matter what is being explained in it. It is easy to imagine that there are languages much more interpretable than neural nets (e.g., English). However, it is much harder to imagine a language in which all (compressible) things are interpretable.
Python might be more readable than C, but some Python programs are still going to be really hard to understand, and not only because of length. (Sometimes the terser program is the harder one to understand.)
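As a toy illustration of that last point (the snippets are invented here, not taken from the discussion): both functions below compute Fibonacci numbers, but the terser one is the harder one to read.

```python
from functools import reduce

# Terse version: shorter, but the intent is opaque at a glance.
fib_terse = lambda n: reduce(lambda p, _: (p[1], p[0] + p[1]), range(n), (0, 1))[0]

# Longer version: more characters, yet much easier to follow.
def fib_readable(n: int) -> int:
    """Return the nth Fibonacci number by iterating up from the base cases."""
    previous, current = 0, 1
    for _ in range(n):
        previous, current = current, previous + current
    return previous

assert fib_terse(10) == fib_readable(10) == 55
```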
Perhaps the claim is that such Python programs won’t be encountered due to relevant properties of the universe (i.e., because the universe is understandable).
That’s indeed where some of the hope lies, yep!