We do have empirical data showing that the neural network “prior” is biased towards low-complexity functions, and some arguments for why we should expect this to be the case—see this new blog post, and my comment here. Essentially, low-complexity functions correspond to larger volumes in the parameter space of neural networks. If functions with large volumes also have large basins of attraction, and if running SGD is roughly equivalent to descending into a random basin (weighted by its size), then this would essentially explain why neural networks work.
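The first part of this claim is easy to probe directly: sample parameters of a tiny network at random and look at the distribution over the functions it computes. Below is a minimal sketch of that experiment (a toy setup of my own, not the exact protocol from the post I link): a small tanh network on 3-bit boolean inputs, sampled with Gaussian weights. If the parameter-to-function map were unbiased, each of the 256 possible boolean functions would appear with frequency roughly 1/256; instead, a handful of simple functions (such as the constants) occupy a vastly larger share of parameter space.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# All 8 inputs of a 3-bit boolean function, one per row.
X = np.array([[(i >> b) & 1 for b in range(3)] for i in range(8)], dtype=float)

def random_function(rng, hidden=10):
    """Sample a random 3-10-1 tanh network (Gaussian weights) and
    read off the boolean function it computes (output thresholded at 0)."""
    W1 = rng.normal(size=(3, hidden))
    b1 = rng.normal(size=hidden)
    W2 = rng.normal(size=hidden)
    b2 = rng.normal()
    out = np.tanh(X @ W1 + b1) @ W2 + b2
    return tuple((out > 0).astype(int))  # truth table as an 8-tuple

n = 20000
counts = Counter(random_function(rng) for _ in range(n))

# Under an unbiased map each function would appear ~n/256 times;
# the empirical distribution is instead heavily concentrated.
top_fn, top_count = counts.most_common(1)[0]
print(f"distinct functions seen: {len(counts)} / 256")
print(f"most frequent truth table: {top_fn}")
print(f"its count: {top_count} (uniform baseline ~ {n / 256:.0f})")
```

The frequency ranking this produces is exactly the “prior” in question: functions describable with little information (constants, single-literal functions) soak up far more parameter volume than complex ones, which is what the volume argument above predicts.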
I haven’t seen the paper you link, so I can’t comment on it specifically, but I want to note that the claim “SGD is roughly Bayesian” does not imply that Bayesian inference would give better generalisation than SGD. It can simultaneously be the case that the neural network “prior” is biased towards low-complexity functions, that SGD roughly follows the “prior”, and that SGD provides some additional bias towards low-complexity functions (over and above what is provided by the “prior”). For example, if you look at Figure 6 in the post I link, you can see that different versions of SGD do provide slightly different inductive biases. However, this effect seems to be quite small relative to what is provided by the “prior”.