I’d like to see Hutter’s model “translated” a bit to DNNs, e.g. by assuming they get right anything that’s within epsilon of a training data point, or something along those lines.
Under this assumption, asymptotically (i.e. with enough data) the model behaves like a nearest-neighbor classifier. If you also adopt the d-dimensional manifold assumption from the other model, its arguments carry over and give error scaling as D^{-c/d} for some constant c (probably c = 1 or 2, depending on exactly which quantity’s scaling we’re measuring).
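To sanity-check the D^{-c/d} claim, here’s a minimal simulation (my construction, not something from either model): sample D training points uniformly from [0, 1]^d as a stand-in for the d-dimensional manifold, and measure the mean distance from fresh test points to their nearest training point. The fitted log-log slope should come out near -1/d, i.e. c = 1 when “error” is raw nearest-neighbor distance (squared distance would give c = 2).

```python
# Empirical check of the claimed D^{-c/d} scaling; the uniform cube standing
# in for the d-dimensional manifold and nearest-neighbor distance standing
# in for error are my assumptions, not part of either model.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

def nn_distance(D, d, num_test=1000):
    """Mean distance from test points to the nearest of D training points,
    both drawn uniformly from [0, 1]^d."""
    train = rng.random((D, d))
    test = rng.random((num_test, d))
    dists, _ = cKDTree(train).query(test)  # distance to nearest training point
    return dists.mean()

d = 4
sizes = [100, 400, 1600, 6400, 25600]
errs = [nn_distance(D, d) for D in sizes]
# The log-log slope should come out near -1/d, i.e. c = 1 for raw distance
# (squared distance would give c = 2).
slope = np.polyfit(np.log(sizes), np.log(errs), 1)[0]
print(f"fitted exponent: {slope:.3f}, predicted: {-1 / d:.3f}")
```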
I’m not entirely sure how you’d generalize the Zipf assumption to the “within epsilon” case, since the original model made no smoothness assumption on the function being predicted (e.g. [0, 0, 0] and [0, 0, 0.000001] could have completely different values).
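For reference, the original discrete setup is easy to simulate, which also makes the no-smoothness point concrete: items have identities but no geometry, so “within epsilon” has no meaning without an added assumption. This is a sketch under my reading of the model (the Zipf exponent alpha and the truncated vocabulary size V are illustrative choices): items are drawn i.i.d. from a Zipf distribution, the learner gets exactly the items it has seen right, and error is the probability mass of never-seen items; with p_i ∝ i^{-alpha}, a tail-mass argument predicts error scaling as n^{-(alpha-1)/alpha}.

```python
# Sketch of the discrete Zipf memorization model under my reading of it;
# alpha and V are illustrative choices. There is no metric on items here,
# which is why an epsilon-ball version needs an extra smoothness assumption.
import numpy as np

alpha = 1.5                  # Zipf exponent (assumed)
V = 100_000                  # truncated vocabulary of distinct items (assumed)
p = np.arange(1, V + 1, dtype=float) ** -alpha
p /= p.sum()

def memorization_error(n):
    """Expected error after n i.i.d. draws when the learner is correct
    exactly on items it has seen: sum_i p_i * P(item i never drawn)."""
    return float(np.sum(p * (1.0 - p) ** n))

ns = np.array([10**2, 10**3, 10**4, 10**5])
errs = np.array([memorization_error(n) for n in ns])
# With p_i proportional to i^{-alpha}, the tail-mass argument predicts
# error ~ n^{-(alpha-1)/alpha}, i.e. n^{-1/3} for alpha = 1.5.
slope = np.polyfit(np.log(ns), np.log(errs), 1)[0]
print(f"fitted exponent: {slope:.3f}, predicted: {-(alpha - 1) / alpha:.3f}")
```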