The human brain can get away with such a low “clock speed” because intelligence is an embarrassingly parallel problem. Realtime constraints and the clock speed of a chip put a limit on how deep the stack of neural net layers can be, but no limit on how wide the net can be, and according to deep learning theory (the universal approximation results), a sufficiently wide net can approximate essentially any function.
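To make the width-vs-depth point concrete, here’s a toy NumPy sketch (illustrative only, not a benchmark): with roughly the same parameter budget, a deep narrow net needs one sequential matrix multiply per layer, while a wide shallow net needs a constant number of sequential steps regardless of width. Width is the dimension that parallel hardware can absorb; depth is bounded by clock speed under realtime constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, layers):
    """Sequential forward pass; each layer must wait for the previous one."""
    steps = 0
    for W in layers:
        x = np.maximum(W @ x, 0.0)  # ReLU layer
        steps += 1
    return x, steps

x = rng.standard_normal(64)

# Deep and narrow: 16 layers of 64x64 weights (~65k parameters total).
deep = [rng.standard_normal((64, 64)) for _ in range(16)]

# Shallow and wide: one 1024x64 layer (~65k parameters total).
wide = [rng.standard_normal((1024, 64))]

_, deep_steps = forward(x, deep)
_, wide_steps = forward(x, wide)
print(deep_steps, wide_steps)  # 16 sequential steps vs 1
```

The parameter counts match (16 × 64 × 64 = 1024 × 64), but the deep net imposes 16 serial dependencies per forward pass while the wide net imposes one, and that single huge matmul can be spread across parallel units.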
We also haven’t yet seen how big an impact neuromorphic architectures could have. It could be several orders of magnitude. Add in the ability of multiple intelligent units to work together just as humans do (but with less in-fighting), and it’s hard to say how much effective collective intelligence they could express.
Thanks for the reply. Do you have any position or intuitions on questions 1 or 2?
Does more inference compute reduce inference time?
Can we actually speed up the “thinking” of fully trained ML models by K times during inference if we run it on processors that are K times faster?
1. Yes 2. Yes
This all comes with the caveat that doing things faster doesn’t mean the model can solve bigger, more difficult problems, or that its solutions will be of higher quality.
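The “yes” answers above rest on the assumption that inference is compute-bound, so wall-clock latency scales inversely with processor speed for a fixed model. A toy latency model (the numbers and function name are hypothetical):

```python
def inference_latency(model_flops, flops_per_second):
    """Wall-clock time for one forward pass, assuming pure compute-bound inference."""
    return model_flops / flops_per_second

MODEL_FLOPS = 1e12          # hypothetical cost of one forward pass
BASE_SPEED = 1e14           # hypothetical baseline processor throughput
K = 4                       # hardware speedup factor

base = inference_latency(MODEL_FLOPS, BASE_SPEED)
faster = inference_latency(MODEL_FLOPS, K * BASE_SPEED)
print(base / faster)  # 4.0 -> K-times-faster hardware gives a K-times speedup
```

In practice memory bandwidth or interconnect can be the bottleneck instead, in which case the speedup is less than K; and as the caveat says, a faster clock changes when you get an answer, not how good the answer is.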