It would be great to see an analysis of this from a complexity-theoretic / cryptographic perspective. Are there distributions that can’t be imitated correctly in this way, even when they should be within the power of your model? Are there distributions where you get potentially problematic behavior like in the steganography case?
(That’s also surely of interest to the mainstream ML community given the recent prominence of variational autoencoders, so it seems quite likely someone has done it.)
After that, there is a more subtle question about learnability, but it would be good to start with the easier, representational part.
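For concreteness, here is a minimal sketch of the kind of generative model being alluded to: a standard VAE trained by maximizing the ELBO (equivalently, minimizing reconstruction error plus a KL penalty on the latent). All names and dimensions below (e.g. `TinyVAE`, `z_dim`) are illustrative assumptions, not anything taken from the discussion above.

```python
# Minimal sketch of the standard VAE objective (ELBO), assuming a
# Gaussian-latent VAE. Architecture and names are illustrative only.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)      # posterior mean
        self.logvar = nn.Linear(h_dim, z_dim)  # posterior log-variance
        self.dec = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def elbo_loss(x_hat, x, mu, logvar):
    # Reconstruction term (Bernoulli likelihood on inputs in [0, 1])
    recon = nn.functional.binary_cross_entropy_with_logits(
        x_hat, x, reduction="sum"
    )
    # KL(q(z|x) || N(0, I)), in closed form for diagonal Gaussians
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

if __name__ == "__main__":
    vae = TinyVAE()
    x = torch.rand(32, 784)  # stand-in batch; real data would go here
    x_hat, mu, logvar = vae(x)
    print(elbo_loss(x_hat, x, mu, logvar).item())
```

The questions above can then be read against this setup: for which target distributions can a model of this form match the data at all, and what information does the latent `z` end up encoding (the steganography worry)?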