Thanks, I had misunderstood Gurkenglas — I’m not used to thinking of a randomly initialized model as a bag of priors rather than as a random starting point in a very high-dimensional space or an inchoate mess, but yes, under the analogy to Bayesian inference it’s actually some sort of statistical approximation to a uniform prior (with, the CLT informs us, a simplicity bias that approximates the Solomonoff one).
I was responding to Gurkenglas’ comment as I understood it, I agree your paper is not about this.