Have fun with generative models such as variational Bayesian neural networks, generative adversarial networks, applications of Fokker–Planck/Langevin/Hamiltonian dynamics to ML and NNs in particular, and so on. There are certainly lots of open problems for the mathematically inclined which are much more interesting than “Look ma, my neural networks made psychedelic artwork and C-looking code with more or less matched parentheses”.
For instance, this paper provides pointers to some of these methods and describes a class of failure modes that are still difficult to address.
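For context on one of the methods mentioned, here is a minimal sketch of unadjusted Langevin dynamics used as a sampler. The function names and step size are illustrative choices, not from any specific paper; the target (a standard Gaussian, whose score is simply -x) is picked so the result is easy to check:

```python
import numpy as np

def langevin_sample(grad_log_p, x0, step=0.05, n_steps=20000, seed=0):
    """Unadjusted Langevin dynamics:
    x_{t+1} = x_t + (step/2) * grad log p(x_t) + sqrt(step) * N(0, I).
    The chain's stationary distribution approximates p (with O(step) bias,
    since no Metropolis correction is applied)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    samples = np.empty((n_steps,) + x.shape)
    for t in range(n_steps):
        x = x + 0.5 * step * grad_log_p(x) + np.sqrt(step) * rng.standard_normal(x.shape)
        samples[t] = x
    return samples

# Target: standard normal, so grad log p(x) = -x.
samples = langevin_sample(lambda x: -x, x0=[3.0])
burned = samples[5000:]  # discard burn-in
print(burned.mean(), burned.var())  # should be near 0 and 1
```

The same update rule, with the score function replaced by a learned network, is the engine behind score-based generative models; the Metropolis-adjusted variant (MALA) removes the discretization bias.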
Yeah, I suppose our instincts agree, because I’ve already studied all these things except the last two :-)