I confess that I have a weakness for slightly fanciful titles. In my defence, though, I do actually think that “unreasonable” is a reasonable way of describing the success of neural networks. The argument in the original paper was something like “it could have been the case that math just wasn’t that helpful in describing the universe, but actually it works really well on most things we try it on, and we don’t have any principled explanation for why that is so”. Similarly, it could have been the case that feedforward neural networks just weren’t very good at learning useful functions, but actually they work really well on most things we try them on, and we don’t have any principled explanation for why that is so.