especially as compute will only keep scaling until ~2030, after which the amount of fuel for exploring algorithmic ideas won’t keep growing as rapidly
Technical flag: compute scaling won’t completely stop, but it will slow to the historical Moore’s-law trend plus historical fab-buildout times, dropping from about 3.5x per year to about 1.55x per year. That does take wind out of the sails of algorithmic progress, though it’s worth noting that even after LLM scaling slows, we should be able to simulate human brains passably by the late 2030s, which would speed up progress to AGI.
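To get a feel for what that slowdown means in practice, here is a quick sketch of the compounding difference between the two growth rates over a five-year window (the 3.5x and 1.55x multipliers come from the comment above; the five-year horizon is just an illustrative choice):

```python
# Yearly compute multipliers from the comment above:
# 3.5x/yr (current trend) vs 1.55x/yr (Moore's law + fab buildout).
fast, slow = 3.5, 1.55
years = 5  # illustrative horizon, not from the source

fast_total = fast ** years  # compounded growth at the current trend
slow_total = slow ** years  # compounded growth at the slowed trend

print(f"5-year growth at 3.5x/yr:  {fast_total:.0f}x")
print(f"5-year growth at 1.55x/yr: {slow_total:.0f}x")
```

The gap compounds quickly: roughly 525x of compute growth over five years at the current rate versus under 10x at the slowed rate, which is why the slowdown matters so much for compute-hungry algorithmic exploration.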