I think I’ve yet to see a paper that convincingly supports the claim that neural nets are learning natural representations of the world. For some papers that refute this claim, see e.g.
My impression is that they can in fact learn "natural" representations of the world; a good example is http://arxiv.org/abs/1311.2901 (a rough sketch of that kind of layer-by-layer inspection follows below).
On the other hand, since they tend to be task-specific learners, they might take shortcuts that we wouldn't perceive as "natural"; our "natural object" ontology is optimized for a much more general task than most NNets face.
If I'm correct about this, I would expect NNets to become more "natural" as their tasks get closer to being "AI-complete", e.g. question-answering systems and scene-description networks.
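For what it's worth, the kind of inspection that paper does can be approximated cheaply. This is a minimal sketch, not the paper's deconvnet method: it assumes PyTorch/torchvision and a hypothetical image file `example.jpg`, and simply pulls raw intermediate feature maps from a pretrained CNN so you can compare an early layer (edge/texture-like responses) with a late one (more object-like responses).

```python
# Sketch: inspect intermediate CNN representations, in the spirit of the
# visualizations in http://arxiv.org/abs/1311.2901. Assumes PyTorch and
# torchvision; uses raw activation maps via forward hooks rather than the
# paper's deconvnet reconstructions.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations = {}

def save_activation(name):
    # Return a hook that stores the layer's output under the given name.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register hooks on an early and a late convolutional stage.
model.layer1.register_forward_hook(save_activation("layer1"))
model.layer4.register_forward_hook(save_activation("layer4"))

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "example.jpg" is a placeholder for any image you want to probe with.
img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    model(img)

# Early layers tend to respond to edges and textures, later layers to
# object-like structure -- the pattern the paper's visualizations show.
for name, act in activations.items():
    print(name, tuple(act.shape))
```

Plotting individual channels of `activations["layer1"]` versus `activations["layer4"]` (e.g. with matplotlib) is the quickest way to eyeball how "natural" the learned features look at each depth.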