The passage seems fine to me; I commented on Erdil’s post and other brain efficiency discussions at the time. I still think that power consumption is a more objective way of comparing the performance characteristics of the brain vs. silicon, and that the various FLOP/s comparisons favored by critics of the clock speed argument in the IAB passage are much more fraught ([1], [2]).
It’s true that clock speed (and neuron firing speed) aren’t straightforwardly or directly translatable to “speed of thought”, but both are direct proxies for energy consumption and power density. And a very rough BOTEC shows that ~10,000x is a reasonable estimate for the difference in power density between the brain and silicon.
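For concreteness, here’s a minimal version of that BOTEC. All of the inputs (cortical sheet area, GPU power and die size) are round numbers I’m assuming for illustration, not careful measurements; the only point is that the ratio comes out around four orders of magnitude.

```python
# Rough BOTEC (all numbers are assumed round figures, not measurements).
# "Power density" here is watts per unit of active surface area:
# the unfolded cortical sheet for the brain, the die for a datacenter GPU.

brain_power_w = 20.0        # ~20 W total brain power budget
cortex_area_cm2 = 2500.0    # ~0.25 m^2 of cortical sheet (assumed)

gpu_power_w = 700.0         # TDP of a current datacenter GPU (assumed)
gpu_die_area_cm2 = 8.0      # ~800 mm^2 die (assumed)

brain_density = brain_power_w / cortex_area_cm2   # ~0.008 W/cm^2
gpu_density = gpu_power_w / gpu_die_area_cm2      # ~90 W/cm^2

print(f"brain:   {brain_density:.4f} W/cm^2")
print(f"silicon: {gpu_density:.1f} W/cm^2")
print(f"ratio:   ~{gpu_density / brain_density:,.0f}x")   # ~10,000x
```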
Essentially, the brain is massively underclocked because of design-space restrictions imposed by biology and evolution, whereas silicon-based processing has been running up against fundamental physical limits on component size, clock speed, and power density for a while now. So once AIs can run whatever cognitive algorithms the brain implements (or algorithms that match the brain in the high-level quality of the actual thoughts) at any speed, the already-existing power density difference implies they’ll immediately have a much higher performance ceiling in terms of the throughput and latency at which they can run those algorithms. It’s not a coincidence that making this argument via clock speeds leads to basically the same conclusion as making it via power density.
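A toy comparison of the two framings, again with assumed round numbers: the exact ratios differ (the raw clock-speed gap is larger than the power-density gap), but both say the same qualitative thing, namely that silicon runs its elements orders of magnitude faster and “hotter” than biology does, which is the only part of the argument that matters here.

```python
# Toy comparison of the two framings (all figures are assumed round numbers).

neuron_firing_hz = 100.0    # order-of-magnitude peak firing rate
silicon_clock_hz = 1e9      # ~1 GHz and up for modern logic

brain_power_density = 0.008   # W/cm^2 (from the BOTEC above)
silicon_power_density = 90.0  # W/cm^2 (from the BOTEC above)

clock_ratio = silicon_clock_hz / neuron_firing_hz                   # ~10^7
power_density_ratio = silicon_power_density / brain_power_density   # ~10^4

print(f"clock / firing-rate gap: ~{clock_ratio:.0e}x")
print(f"power density gap:       ~{power_density_ratio:.0e}x")
# Different magnitudes, same conclusion: enormous headroom once the same
# algorithms can be run on substrates without biology's constraints.
```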
Essentially, the brain is massively underclocked because of design-space restrictions imposed by biology and evolution
This is the answer, but the main restriction is power efficiency: the brain provides a great deal of intelligence for a budget of only ~20 watts. Spreading that power budget out over a very wide memory operating at a very slow speed just turns out to be the most power-efficient design (vs. a very small memory running at a very high speed), because memory > time.
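A sketch of why “wide and slow” wins on energy for a fixed throughput, using a standard CMOS-style dynamic power model (P ≈ C·V²·f, with supply voltage scaling roughly with clock frequency). The scaling law and all the numbers are my assumptions for illustration; biology isn’t CMOS, but the qualitative story that switching faster costs superlinear energy per operation is the same.

```python
# Why many slow units beat few fast units at the same total throughput,
# under the assumed scaling: dynamic power per unit P ~ C * V^2 * f, and
# the voltage V needed to sustain a clock rises roughly linearly with f.

def energy_per_op(freq, c=1.0, v_per_freq=1.0):
    """Energy per operation for one unit clocked at `freq` (arbitrary units)."""
    v = v_per_freq * freq        # voltage needed to sustain this frequency
    power = c * v**2 * freq      # dynamic power of the unit
    ops_per_sec = freq           # one op per cycle (arbitrary scale)
    return power / ops_per_sec   # = c * v^2, grows quadratically with freq

# Two designs with identical total op rate (4 "units" of throughput):
narrow_fast = {"units": 1,    "freq": 4.0}    # one unit at high clock
wide_slow   = {"units": 4000, "freq": 0.001}  # many units at very low clock

for name, cfg in [("narrow/fast", narrow_fast), ("wide/slow", wide_slow)]:
    e = energy_per_op(cfg["freq"])
    total_power = e * cfg["units"] * cfg["freq"]
    print(f"{name}: energy/op = {e:.2e}, total power = {total_power:.2e}")
# Same throughput, vastly less power for the wide/slow design -- the
# direction evolution was pushed under a ~20 W budget.
```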