Deep limitations? Examining expert disagreement over deep learning


A recent publication by Carla Zoe Cremer, who’s working at the Future of Humanity Institute:

We conducted 25 expert interviews resulting in the identification of 40 limitations of the deep learning approach and 5 origins of expert disagreement. These origins are open scientific questions that partially explain different interpretations by experts and thereby elucidate central issues in AI research. They are: abstraction, generalisation, explanatory models, emergence of planning, and intervention. We explore both optimistic and pessimistic arguments that are related to each of the five key questions. We explore common beliefs that underpin optimistic and pessimistic argumentation. Our data provide a basis upon which to construct a research agenda that addresses key deep learning limitations.

  1. Abstraction: Do current artificial neural networks (ANNs) form abstract representations effectively?

  2. Generalisation: Should ANNs’ ability to generalise inspire optimism about deep learning?

  3. Explanatory, causal models: Is it necessary, possible, and feasible to use deep learning to construct compressed, causal, explanatory models of the environment, as described in Lake et al. (2017)?

  4. Emergence of planning: Will sufficiently complex environments enable deep learning algorithms to develop the capacity for hierarchical, long-term reasoning and planning?

  5. Intervention: Will deep learning support and require learning by intervening in a complex, real environment?

Personally, I’m fairly confident that the first three won’t be major problems for deep learning. I’m much more uncertain about the fourth and fifth, since they hinge on types of training data that seem quite difficult to obtain. (I’m happy to agree with the fourth in principle, but in practice the “sufficient complexity” might be well beyond the sorts of training environments AI researchers currently think about.)