Sadly, in my experience, looking at the representational capacity of neural networks quickly runs into very annoying technical problems. For example, for a fixed dimension, a finite-size network can fit arbitrary continuous functions to arbitrary accuracy. The construction is pathological (in particular, the network weights become impractically large), but it shows why it’s hard to prove limitations on the representational capacity of neural networks.
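This isn’t the pathological fixed-size construction itself (which relies on exotic activations), but a simpler, related phenomenon is easy to see numerically: even ordinary ReLU interpolation needs weights that blow up as the target’s features shrink. A minimal numpy sketch, with function names of my own choosing:

```python
import numpy as np

def relu_interpolate(xs, ys):
    """Express the piecewise-linear interpolant of (xs, ys) as a
    one-hidden-layer ReLU net: f(x) = ys[0] + sum_i c_i * relu(x - xs[i]).
    Returns the outer-layer weights c_i."""
    slopes = np.diff(ys) / np.diff(xs)                 # slope on each segment
    # c_i is the change of slope at knot x_i (starting from slope 0)
    return np.concatenate([[slopes[0]], np.diff(slopes)])

def evaluate(xs, ys, cs, x):
    return ys[0] + np.sum(cs * np.maximum(0.0, x - xs[:-1]))

# A bump of width eps around 0.5: as eps shrinks, the weights explode.
for eps in [0.1, 0.01, 0.001]:
    xs = np.array([0.0, 0.5 - eps, 0.5, 0.5 + eps, 1.0])
    ys = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
    cs = relu_interpolate(xs, ys)
    print(eps, np.max(np.abs(cs)))  # max weight grows like 2/eps
```

The network size is fixed here (one hidden unit per knot), yet representing a narrower bump forces the weights to scale like 1/eps, which is the flavor of pathology I mean.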
You could limit the network parameters to finite precision, but that makes formal reasoning extremely hard. Numerical experiments could still yield interesting results, though.
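As a sketch of what such a numerical experiment might look like (the setup and names are entirely my own): quantize the weights of a small network to a fixed number of bits and measure how the approximation error responds.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small random one-hidden-layer tanh network with full-precision weights.
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)
W2 = rng.normal(size=16)

def net(x, W1, b1, W2):
    # x: (n,) inputs -> (n,) outputs
    return np.tanh(x[:, None] * W1.T + b1) @ W2

def quantize(w, bits):
    """Round each weight to a fixed grid with 2**bits levels per unit."""
    scale = 2.0 ** bits
    return np.round(w * scale) / scale

xs = np.linspace(-2, 2, 401)
ref = net(xs, W1, b1, W2)
for bits in [2, 4, 8, 16]:
    out = net(xs, quantize(W1, bits), quantize(b1, bits), quantize(W2, bits))
    print(bits, np.max(np.abs(out - ref)))  # error shrinks as bits grow
```

Experiments like this can't prove anything, but they at least let you probe how much precision a given representation actually needs.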
Personally, I’d put my money on research into what neural networks can learn (rather than what they can represent). We’re still in the early stages, but ideas like leap complexity seem promising to me.