My impression of singular learning theory

Disclaimer: I’m by no means an expert on singular learning theory and what I present below is a simplification that experts might not endorse. Still, I think it might be more comprehensible for a general audience than going into digressions about blowing up singularities and birational invariants.

Here is my current understanding of what singular learning theory is about in a simplified (though perhaps more realistic?) discrete setting.

Suppose you represent a neural network architecture as a map $f : W \to \mathcal{F}$, where $W$ is the set of all possible parameters of $f$ (seen as floating point numbers, say) and $\mathcal{F}$ is the set of all possible computable functions from the input space to the output space you’re considering. In thermodynamic terms, we could identify elements of $W$ as “microstates” and the corresponding functions $f(w)$ that the NN architecture maps them to as “macrostates”.
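
To make this discrete picture concrete, here is a toy sketch of what $f$, the microstates, and the macrostates look like when everything is small enough to enumerate. The specific “architecture” (a single thresholded affine unit), the discretized weight values, and all names below are invented purely for illustration; nothing about them is part of the argument.

```python
# Toy illustration of f : W -> F in a fully discrete setting. The "architecture"
# (one thresholded affine unit), the discretized weights, and the tiny input
# space are all invented just to make the microstate/macrostate picture concrete.
import itertools
import math
from collections import Counter

INPUTS = [-1, 0, 1]                      # tiny input space
WEIGHT_VALUES = [-1.0, -0.5, 0.5, 1.0]   # crude "2-bit" discretization of each parameter

def toy_network(w, b, x):
    """Microstate (w, b) -> output on a single input x: a thresholded affine unit."""
    return 1 if w * x + b > 0 else 0

# W = every possible parameter setting; the macrostate of (w, b) is the function
# it computes, represented extensionally as its tuple of outputs on INPUTS.
W = list(itertools.product(WEIGHT_VALUES, WEIGHT_VALUES))
macrostate = {
    params: tuple(toy_network(params[0], params[1], x) for x in INPUTS)
    for params in W
}

# |f^{-1}(g)| for each function g in the image of f, and the corresponding
# "f-complexity" log2(|W| / |f^{-1}(g)|) defined later in the post.
preimage_size = Counter(macrostate.values())
for g, count in sorted(preimage_size.items()):
    c_f = math.log2(len(W)) - math.log2(count)
    print(f"function {g}: |preimage| = {count:2d}, f-complexity = {c_f:.2f} bits")
```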

Furthermore, suppose that $\mathcal{F}$ comes together with a loss function $L : \mathcal{F} \to \mathbb{R}$ evaluating how good or bad a particular function is. Assume you optimize $L$ using something like stochastic gradient descent on the composite function $L \circ f$ with a particular learning rate $\eta$.
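
For concreteness, the kind of update rule I have in mind is an ordinary noisy SGD step on $L \circ f$. In the sketch below, the quadratic stand-in for the loss and the Gaussian model of minibatch gradient noise are assumptions made only so the snippet runs; they are not part of the argument.

```python
# Schematic sketch of noisy SGD on the composite objective L∘f. The quadratic
# stand-in for L(f(w)) and the Gaussian model of minibatch gradient noise are
# assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def composite_loss(w):
    """Stand-in for L(f(w))."""
    return 0.5 * float(np.sum(w ** 2))

def noisy_grad(w):
    """Stand-in for a minibatch gradient: true gradient plus noise."""
    return w + rng.normal(scale=1.0, size=w.shape)

eta = 1e-2            # the learning rate that the constant beta (point 1 below) depends on
w = rng.normal(size=10)
for _ in range(50_000):
    w -= eta * noisy_grad(w)
# Run long enough, the iterates w behave like samples from a Boltzmann-like
# distribution proportional to exp(-beta * L(f(w))), which is point 1 below.
```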

Then, in general, we have the following results:

  1. SGD defines a Markov chain structure on the space $W$ whose stationary distribution is proportional to $e^{-\beta L(f(w))}$ on parameters $w \in W$, for some positive constant $\beta$ that depends on the learning rate. This is just a basic fact about the Langevin dynamics that SGD would induce in such a system.

  2. In general $f$ is not injective, and we can define the “$f$-complexity” of any function $g \in \mathcal{F}$ as $c_f(g) = \log_2 |W| - \log_2 |f^{-1}(g)|$. Then, the probability that we arrive at the macrostate $g$ is going to be proportional to $2^{-c_f(g)} \, e^{-\beta L(g)}$ (see the short derivation just after this list).

  3. When $L$ is some kind of negative log-likelihood, this approximates Solomonoff induction in a tempered Bayes paradigm (we raise likelihood ratios to a power $\beta$ instead of $1$), insofar as the $f$-complexity $c_f(g)$ is a good approximation to the Kolmogorov complexity of the function $g$, which will happen if the function approximator defined by $f$ is sufficiently well-behaved.
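
To spell out the step from (1) to (2) in this discrete setting: summing the stationary weight $e^{-\beta L(f(w))}$ over the preimage of a fixed function $g$ gives

$$\Pr[f(w) = g] \;\propto\; \sum_{w \in f^{-1}(g)} e^{-\beta L(f(w))} \;=\; |f^{-1}(g)|\, e^{-\beta L(g)} \;=\; |W|\, 2^{-c_f(g)}\, e^{-\beta L(g)},$$

and since $|W|$ is a constant, the macrostate probability is proportional to $2^{-c_f(g)} e^{-\beta L(g)}$: a trade-off between loss and $f$-complexity, which is what the Solomonoff analogy in (3) runs on.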

The intuition for why we would expect (3) to be true in practice has to do with the nature of the function approximator $f$. When $c_f(g)$ is small, it probably means that we only need a small number of bits of information on top of the definition of $f$ itself to define $g$, because “many” of the possible parameter values for $f$ are implementing the function $g$. So $g$ is probably a simple function.
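
One rough way to quantify this (a heuristic counting sketch, not something the setup above strictly proves): the preimages of distinct functions are disjoint, so at most $2^{c_f(g)}$ functions have preimages at least as large as that of $g$. Given a description of $f$ (and of the threshold $|f^{-1}(g)|$), we can therefore pin $g$ down with roughly $c_f(g)$ extra bits, suggesting something like

$$K(g) \;\lesssim\; K(f) + c_f(g) + O(\log |W|).$$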

On the other hand, if $g$ is a simple function and $f$ is sufficiently flexible as a function approximator, we can probably implement the functionality of $g$ by pinning down only a small number of the bits of a parameter vector in the domain $W$ of $f$, which leaves us the rest of the bits to vary as we wish. This makes $|f^{-1}(g)|$ quite large, and by extension the complexity $c_f(g)$ quite small.
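
As a toy count under that assumption: if a parameter vector is an $n$-bit string and $g$ can be implemented by fixing some $k$ of those bits while the remaining $n - k$ bits are free to take any value, then $|f^{-1}(g)| \ge 2^{\,n-k}$, so

$$c_f(g) = \log_2 |W| - \log_2 |f^{-1}(g)| \le n - (n - k) = k,$$

i.e. the $f$-complexity of $g$ is at most the number of parameter bits that actually had to be pinned down to implement it.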

The vague concept of “flexibility” mentioned in the paragraph above requires $f$ to have singularities of many effective dimensions, as this is just another way of saying that the image of $f$ has to contain functions with a wide range of $f$-complexities. If $f$ is a one-to-one function, this clean version of the theory no longer works, though if $f$ is still “close” to being singular (for instance, because many of the functions in its image are very similar), then we can still recover results like the ones I mentioned above. The basic insights remain the same in this setting.

I’m wondering what singular learning theory experts have to say about this simplification of their theory. Is this explanation missing some important details that are visible in the full theory? Does the full theory make some predictions that this simplified story does not make?