[Question] Are Speed Superintelligences Feasible for Modern ML Techniques?


I am very ignorant about machine learning.


I’ve frequently heard suggestions that a superintelligence could dominate humans by thinking a thousand or million times faster than a human. Is this actually a feasible outcome for prosaic ML systems?

Why I Doubt Speed Superintelligence

One reason I doubt this is that the “superpower” of a speed superintelligence is faster serial thought. However, I’m under the impression that we’re already running into fundamental limits on serial processing speed and can’t make processors run much faster:

In 2002, an Intel Pentium 4 model was introduced as the first CPU with a clock rate of 3 GHz (three billion cycles per second, corresponding to ~0.33 nanoseconds per cycle). Since then, the clock rate of production processors has increased much more slowly, with performance improvements coming from other design changes.

Set in 2011, the Guinness World Record for the highest CPU clock rate is 8.42938 GHz, achieved with an overclocked AMD FX-8150 Bulldozer-based chip in an LHe/LN2 cryobath (5 GHz on air).[4][5] This is surpassed by the CPU-Z overclocking record for the highest CPU clock rate, 8.79433 GHz, with an AMD FX-8350 Piledriver-based chip bathed in LN2, achieved in November 2012.[6][7] The Guinness record is also surpassed by the slightly slower AMD FX-8370 overclocked to 8.72 GHz, which tops the HWBOT frequency rankings.[8][9]

The highest base clock rate on a production processor is the IBM zEC12, clocked at 5.5 GHz, which was released in August 2012.

Of course the “clock rate” of the human brain is much slower, but ML models are not going to run on processors with significantly faster clock rates either. Even in 2062, we probably will not have any production processors with a base clock rate above 50 GHz (the real ceiling may well be considerably lower). Rising compute availability for ML will continue to be driven by parallel processing techniques.
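To make the headroom concrete, here is the back-of-envelope arithmetic behind the figures above. Note that the 50 GHz number is my own hypothetical ceiling for 2062, not a sourced projection:

```python
# Back-of-envelope: cycle times and the serial speedup available from
# faster clocks alone. The 50 GHz figure is a hypothetical ceiling.

def cycle_time_ns(clock_hz):
    """Duration of one clock cycle in nanoseconds."""
    return 1e9 / clock_hz

pentium4 = 3.0e9      # 2002: first 3 GHz production CPU
zec12 = 5.5e9         # 2012: highest base clock on a production CPU
hypothetical = 50e9   # speculative 2062 ceiling from the text

print(f"3 GHz cycle:   {cycle_time_ns(pentium4):.2f} ns")   # ~0.33 ns
print(f"5.5 GHz cycle: {cycle_time_ns(zec12):.2f} ns")
print(f"Serial speedup, 3 GHz -> 50 GHz: {hypothetical / pentium4:.1f}x")
```

Even under that speculative ceiling, six decades of clock-rate progress buys roughly a 17x serial speedup, nowhere near the thousand- or million-fold speed advantage usually attributed to a speed superintelligence.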

GPT-30 would not have considerably faster serial processing than GPT-3. And I’m under the impression that “thinking speed” is mostly a function of serial processing speed?


The above said, my questions:

  1. Can we actually speed up the “thinking” of fully trained ML models by K times during inference if we run them on processors that are K times faster?

  2. How does thinking/inference speed scale with compute?

    1. Faster serial processors

    2. More parallel processors