Even when contrarians win, they lose: Jeff Hawkins

Related: Even When Contrarians Win, They Lose

I had long thought that Jeff Hawkins (along with the Redwood Center and Numenta) was pursuing an idea that didn't work, and had kept at it long past the point where giving up was warranted. I formed this belief because I had not heard of any impressive results or endorsements of their research. However, I recently read an interview with Andrew Ng, a leading machine learning researcher, in which he credits Jeff Hawkins with publicizing the "one learning algorithm" hypothesis: the idea that most of the cognitive work of the brain is done by a single algorithm. Ng says that, as a young researcher, this pushed him toward areas that could lead to general AI, though he still believes that AGI is far off.

I found out about Hawkins' influence on Ng after reading an old SL4 post by Eliezer and looking for further information about Jeff Hawkins. It seems that the "one learning algorithm" hypothesis was widely known in neuroscience, but not within AI until Hawkins' work. Based on Eliezer's citation of Mountcastle and his known familiarity with cognitive science, it seems that he learned of this hypothesis independently of Hawkins. The "one learning algorithm" hypothesis matters for intelligence explosion forecasting: if most of cognition runs on a single algorithm, then improving that one algorithm could yield gains across the board at once, making hard takeoff vastly more likely. I have been told that further evidence for this hypothesis has been found recently, but I don't know the details.

This all fits well with Robin Hanson's model. Hawkins had good evidence that better machine learning should be possible, but the particular approaches he took didn't perform as well as less biologically inspired ones, so he's not widely recognized today. Deep learning would have happened without him; there were already many people working in the field, and they started to attract attention once a few tricks and better hardware improved performance. Ng's career, though, can at least in part be credited to Hawkins.

I've been thinking about Robin's hypothesis a lot recently, since many researchers in AI are starting to think about the impacts of their work (though most still consider only near-term societal impacts rather than superintelligence). They recognize that this shift towards thinking about societal impacts is recent, but they have no idea why it is occurring. They know that many people, such as Elon Musk, have been outspoken about AI safety in the media recently, but few have heard of Bostrom's Superintelligence, or attribute the recent change to FHI or MIRI.