Training run size has grown much faster than the world’s total supply of AI compute, so fewer near-frontier experiments can be afforded relative to the frontier. If such experiments were truly a bottleneck on progress, AI algorithmic progress would have slowed down over the past 10 years.
I think this history is consistent with near-frontier experiments being important, and with labs continuing to run a large number of such experiments even as they scale up their spending on training compute.
I.e. suppose OAI now spends $100m/model instead of $1m/model. There’s no reason they couldn’t still spend, say, 50% of their training compute on running 500 0.1%-scale experiments.
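A minimal back-of-the-envelope sketch of that arithmetic, assuming “0.1%-scale” means 0.1% of the frontier run’s compute and reading the 50% figure as relative to that frontier run; the dollar amounts are the hypothetical ones above, not real lab budgets:

```python
# Back-of-the-envelope check of the hypothetical numbers above.
# Assumptions: "0.1%-scale" = 0.1% of the frontier run's compute, and the
# 50% figure is relative to that frontier run. Dollar amounts are the
# illustrative ones from the text, not real lab budgets.

frontier_run_cost = 100e6   # hypothetical $100m frontier training run
experiment_scale = 0.001    # each experiment at 0.1% of frontier scale
num_experiments = 500

cost_per_experiment = frontier_run_cost * experiment_scale      # $100k
total_experiment_cost = num_experiments * cost_per_experiment   # $50m
share_vs_frontier = total_experiment_cost / frontier_run_cost   # 0.5

print(f"Per experiment:               ${cost_per_experiment:,.0f}")
print(f"All 500 experiments:          ${total_experiment_cost:,.0f}")
print(f"Relative to the frontier run: {share_vs_frontier:.0%}")
```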
Caveat: This is at the firm level; you could argue that fewer near-frontier experiments are being done in total across the AI ecosystem, and certainly there’s less information flow between organizations conducting these experiments.
My sense is that labs have scaled up frontier training runs faster than they’ve scaled up their supply of compute.
I.e. in 2015 the biggest training runs would have been <<10% of OAI’s compute, but that’s no longer true today.
Not confident in this!