# Ege Erdil comments on My impression of singular learning theory

• You need to discretize the function before taking preimages. If you just take preimages in the continuous setting, of course you’re not going to see any of the interesting behavior SLT is capturing.

In your case, let’s say that we discretize the function space by choosing which one of the functions $f_c$, $c \in \varepsilon \mathbb{Z}$, you’re closest to, for some $\varepsilon > 0$; for a parameter point $(a, b)$ this amounts to rounding the product $ab$ to the nearest multiple of $\varepsilon$. In addition, we also discretize the parameter space by looking at the lattice $\varepsilon \mathbb{Z}^2$. Now, you’ll notice that there’s a disk of radius about $\sqrt{\varepsilon}$ around the origin which contains only parameters mapping to the zero function, and as our lattice has fundamental area $\varepsilon^2$ this means the “relative weight” of the singularity at the origin is like $\varepsilon / \varepsilon^2 = 1/\varepsilon$.

In contrast, all other points mapping to the zero function only get a relative weight of roughly $1/|a|$, where $|a|$ is the absolute value of their nonzero coordinate. Cutting off the domain somewhere to make it compact and summing over all $|a| \geq \sqrt{\varepsilon}$ to exclude the disk at the origin gives a total contribution of order $\log(1/\varepsilon)$ for all the other points in the minimum loss set. So in the limit $\varepsilon \to 0$ the singularity at the origin, with its weight of order $1/\varepsilon$, accounts for almost everything in the preimage of the zero function. The origin is privileged in my picture just as it is in the SLT picture.
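If it helps, the $1/\varepsilon$ scaling of the origin’s weight is easy to check numerically. A minimal sketch under my assumptions above (parameters $(a, b)$ on the grid $\varepsilon \mathbb{Z}^2$, with a point counted as implementing the zero function when $|ab| \leq \varepsilon/2$):

```python
import math

def origin_weight(eps):
    """Count grid points of (eps*Z)^2 inside the disk of radius sqrt(eps).

    By AM-GM, every such point (a, b) = (i*eps, j*eps) satisfies
    |a*b| <= (a^2 + b^2)/2 <= eps/2, so it implements the zero function.
    """
    r = math.isqrt(int(1 / eps)) + 1
    count = 0
    for i in range(-r, r + 1):
        for j in range(-r, r + 1):
            if (i * i + j * j) * eps <= 1:  # (i*eps)^2 + (j*eps)^2 <= eps
                assert abs(i * j) * eps * eps <= eps / 2
                count += 1
    return count

for eps in [1e-2, 1e-3, 1e-4]:
    # disk area pi*eps divided by cell area eps^2 gives ~pi/eps points,
    # so count * eps should hover near pi
    print(eps, origin_weight(eps), round(origin_weight(eps) * eps, 3))
```

The assertion inside the loop verifies that every grid point in the disk really does round to the zero function, and the printed ratio staying near $\pi$ as $\varepsilon$ shrinks is the claimed $1/\varepsilon$ scaling.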

I think your mistake is that you’re trying to translate between these two models too literally, when you should be thinking of my model as a discretization of the SLT model. Because it’s a discretization at a particular scale, it doesn’t capture what happens as the scale is changing. That’s the main shortcoming relative to SLT, but it’s not clear to me how important capturing this thermodynamic-like limit is to begin with.

Again, maybe I’m misrepresenting the actual content of SLT here, but it’s not clear to me what SLT says aside from this, so...

• Everything I wrote in steps 1-4 was done in a discrete setting (otherwise the set of possible functions is not finite and the whole thing falls apart). I was intending the parameters to be pairs of floating point numbers and the functions to be maps from floats to floats.

However, using that, I think I see what you’re trying to say: the product $ab$ will equal zero in some cases where $a$ and $b$ are both non-zero but very small, because it multiplies down to zero due to the limits of floating point numbers. Therefore the pre-image of the zero function is actually larger than I claimed, and specifically contains a small neighborhood of the origin.
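The underflow mechanism is easy to exhibit directly (Python floats are IEEE 754 doubles, so the same limits apply):

```python
a = 1e-200
b = 1e-200

# Both factors are non-zero, but their true product, 1e-400, lies far
# below the smallest subnormal double (~4.9e-324), so it rounds to zero.
print(a != 0 and b != 0)  # True
print(a * b)              # 0.0
print(a * b == 0.0)       # True
```

So under floating point, the pre-image of the zero function contains a whole neighborhood of the origin, not just the two coordinate axes.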

That doesn’t invalidate my calculation showing that the origin is exactly as likely as any other point in the pre-image, though: they still have the same loss and K-complexity (since they have the same macrostate). On the other hand, you’re saying that there are points in parameter space that are very close to the origin that are also in this same pre-image and also equally likely. Therefore even if the origin itself is just as likely as any other minimum-loss point, being near the origin is more likely than being near one of the others. I think it’s fair to say that that is at least qualitatively the same conclusion as SLT gives in the continuous version of this.
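To make “being near the origin is more likely” concrete, here is a small sketch under my toy assumptions (a hypothetical grid $\varepsilon \mathbb{Z}^2$ of parameter pairs, with $(a, b)$ counted as implementing the zero function when $|ab| \leq \varepsilon/2$): count the minimum-loss grid points within a fixed radius of the origin versus within the same radius of another minimum-loss point such as $(1, 0)$.

```python
def zero_preimage_count_near(center, radius, eps):
    """Count grid points (i*eps, j*eps) within `radius` of `center`
    that implement the zero function, i.e. satisfy |a*b| <= eps/2."""
    cx, cy = center
    n = int(radius / eps) + 1
    i0, j0 = round(cx / eps), round(cy / eps)
    count = 0
    for i in range(i0 - n, i0 + n + 1):
        for j in range(j0 - n, j0 + n + 1):
            a, b = i * eps, j * eps
            if (a - cx) ** 2 + (b - cy) ** 2 <= radius ** 2 and abs(a * b) <= eps / 2:
                count += 1
    return count

eps, radius = 0.01, 0.25
near_origin = zero_preimage_count_near((0.0, 0.0), radius, eps)
near_other = zero_preimage_count_near((1.0, 0.0), radius, eps)
print(near_origin, near_other)  # the origin's neighborhood contains far more of them
```

Both centers are themselves minimum-loss points with the same macrostate, yet the count around the origin comes out an order of magnitude larger, which is exactly the neighborhood asymmetry being described.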

However, I do think this result “happened” due to factors that weren’t discussed in your original post, which makes it sound like it is “due to” K-complexity. K-complexity is a function of the macrostate, which is the same at all of these points, and so it does not distinguish between the origin and the other minimum-loss points at all. In other words, your post tells me which function is likely, while SLT tells me which parameters are likely; these are not the same thing. But you clearly have additional ideas, not stated in the post, that also help you figure out which parameters are likely. Until that is clarified, I think you have a mental theory of this which is very different from what you wrote.

• Sure, I agree that I didn’t put this information into the post. However, why do you need to know which parameters (rather than which function) are more likely to know anything about e.g. how neural networks generalize?

I understand that SLT has some additional content beyond what is in the post, and I’ve tried to explain how you could make that fit in this framework. I just don’t understand why that additional content is relevant, which is why I left it out.

As an additional note, I wasn’t really talking about floating point precision being the important variable here. I’m just saying that if you want K-complexity to match the notion of the real log canonical threshold, you have to discretize SLT in a way that might not be obvious at first glance, and in a way where some conclusions end up being scale-dependent. This is why, if you’re interested in studying this question of the relative contribution of singular points to the partition function, SLT is a better setting to be doing it in. At the risk of repeating myself, I just don’t know why you would try to do that in the first place.

• In my view, it’s a significant philosophical difference between SLT and your post that your post talks only about choosing macrostates while SLT talks about choosing microstates. I’m much less qualified to know (let alone explain) the benefits of SLT, though I can speculate. If we stop training after a finite number of steps, then I think it’s helpful to know where training is converging to. In my example, if you think it’s converging to one of the other minimum-loss points, like $(1, 0)$, then stopping close to that will get you a function that doesn’t generalize too well. If you know it’s converging to the origin, then stopping close to that will get you a much better function (possibly exactly as good, as you pointed out, due to discretization).
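As an illustration of the early-stopping point, under the hypothetical toy parametrization where the pair $(a, b)$ implements the function $x \mapsto abx$: a random perturbation of size $\delta$ around the origin changes the product only at second order in $\delta$, while the same perturbation around a point like $(1, 0)$ changes it at first order.

```python
import random

random.seed(0)

def mean_abs_product(center, noise, trials=10_000):
    """Average |a*b| (the deviation of the implemented function x -> a*b*x
    from the zero function) after a uniform random perturbation of `center`."""
    cx, cy = center
    total = 0.0
    for _ in range(trials):
        a = cx + random.uniform(-noise, noise)
        b = cy + random.uniform(-noise, noise)
        total += abs(a * b)
    return total / trials

noise = 1e-3
err_origin = mean_abs_product((0.0, 0.0), noise)  # second order in the noise
err_other = mean_abs_product((1.0, 0.0), noise)   # first order in the noise
print(err_origin, err_other)
```

Stopping near the origin leaves you with a function several orders of magnitude closer to the zero function than stopping near the other point does, even though both are exact minima of the loss.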

Now this logic is basically exactly what you’re saying in these comments! But I think if someone read your post without prior knowledge of SLT, they wouldn’t figure out that training is more likely to converge to a point near the origin than to a point near any particular other minimum-loss point. If they read an SLT post instead, they would figure that out. In that sense, SLT is more useful.

I am not confident that that is the intended benefit of SLT according to its proponents, though. And I wouldn’t be surprised if you could write a simpler explanation of this in your framework than SLT gives; I just think that this post wasn’t it.