To answer that question: I have provided some of the relevant arguments in my past writing, but at this point the enormous success of DL towards AGI (which I predicted well in advance), the great extent to which it has reverse engineered the brain, the petering out of Moore's law shrinkage, and the fact that the brain remains more efficient than our best accelerators together shift the burden entirely onto you to write up a detailed analysis/argument explaining these facts.
I think there’s just not that much to explain here: to me, human-level cognition just doesn’t seem that complicated or impressive in an absolute sense. It is performed by a 10W computer designed by a blind idiot god, after all.
The fact that current DL-paradigm methods inspired by the brain’s functionality have so far failed to produce artificial cognition of truly comparable quality and efficiency seems more like a failure of those methods than evidence of the brain’s impressiveness, at least so far. I don’t expect this trend to continue in the near term (which I think we agree on), and I grant you some Bayes points for predicting that further in advance.