That doesn’t change the fact that the pioneers really only pursued neural networks because of their similarity to the actual structure of the brain, not by first-principles reasoning about how high dimensionality and gradient descent scale well with data, size, and compute (I understand this is a high bar but this is part of why I don’t think there are any real “experts”). And in their early career especially, they were all mired in the neurological paradigm for thinking about neural networks.
Hinton, who came close to breaking free from this way of thinking when he published the backprop paper, ends it by saying "it is worth looking for more biologically plausible ways of doing gradient descent." In fact, his 2022 forward-forward algorithm shows that his approach is still tied to biological plausibility. In a 2023 interview with the University of Toronto, he mentions that the reason he became concerned about superintelligence was that, while working on the FF algorithm, he realized backpropagation was simply going to be better than any optimization algorithm inspired by the brain.