You might expect the labor force of NormalCorp to be roughly in equilibrium, where the marginal gain from spending more on compute equals the marginal gain from spending more on salaries (to get more/better employees).
[...]
However, I’m quite skeptical that this type of consideration makes a big difference. The ML industry has already varied the compute input massively, with over 7 OOMs of compute difference between research now (in 2025) and research at the time of AlexNet 13 years ago, which invalidates the view that there is some relatively narrow range of inputs in which neither input is bottlenecking. And AI companies effectively can’t pay more to get faster or much better employees, so we’re not at a particularly privileged point in human AI R&D capabilities.
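As a rough sanity check on the “over 7 OOMs” figure, here is a quick calculation; the specific FLOP numbers are commonly cited public estimates that I'm assuming, not figures from the text:

```python
import math

# Rough public estimates (assumptions, not figures from the text):
# AlexNet's training run is commonly estimated at ~5e17 FLOP, while
# frontier pretraining runs circa 2025 are commonly estimated in the
# ~1e25-1e26 FLOP range.
alexnet_flop = 5e17
frontier_flop = 5e25

ooms = math.log10(frontier_flop / alexnet_flop)
print(f"~{ooms:.0f} OOMs of training compute")  # ~8 OOMs, consistent with "over 7"
```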
SlowCorp has 625K H100s per researcher. What do you even do with that much compute if you drop it into this world? Is every researcher just sweeping hyperparameters on the biggest pretraining runs? I’d normally say “scale up pretraining another factor of 100” and then expect that SlowCorp could plausibly outperform NormalCorp, except you’ve limited them to 1 week and a similar amount of total compute, so they don’t even have that option (and in fact they can’t even run normal pretraining runs, since those take longer than 1 week to complete).
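To make the resource arithmetic concrete, here is a back-of-envelope sketch. The 625K H100s per researcher and the 1-week window come from the text; every other number is an illustrative assumption of mine, not a figure from the original scenario:

```python
# Back-of-envelope: why "same total compute, 1 week" implies an enormous
# chips-per-researcher ratio. All numbers below except the first two are
# illustrative assumptions, not figures from the original scenario.

H100S_PER_RESEARCHER = 625_000   # given in the text (SlowCorp)
SLOWCORP_WEEKS = 1               # given in the text

normalcorp_h100s = 50_000        # assumed fleet size
normalcorp_weeks = 25            # assumed project duration (~6 months)
normalcorp_researchers = 2_000   # assumed headcount

# Matching NormalCorp's total compute in a 1-week window:
total_h100_weeks = normalcorp_h100s * normalcorp_weeks
slowcorp_h100s = total_h100_weeks / SLOWCORP_WEEKS
print(f"SlowCorp fleet: {slowcorp_h100s:,.0f} H100s")  # 1,250,000

# At 625K H100s per researcher, that fleet supports only:
print(f"SlowCorp researchers: {slowcorp_h100s / H100S_PER_RESEARCHER:.0f}")  # 2

# NormalCorp's ratio, for contrast:
print(f"NormalCorp: {normalcorp_h100s / normalcorp_researchers:,.0f} H100s/researcher")  # 25

# The pretraining point: a run taking ~90 serial days on NormalCorp's fleet
# (an assumed duration) cannot fit in a 7-day window no matter how many extra
# chips you add, unless the run itself is re-engineered to be ~13x more parallel.
print(f"Required extra parallelism: {90 / 7:.0f}x")
```

The exact numbers don’t matter; the point is that holding total compute fixed while compressing wall-clock time mechanically inflates the per-researcher chip count and breaks any workflow with serial steps longer than a week.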
The quality and amount of labor aren’t the primary problem here. The problem is that current practices for AI development are specialized to the current labor:compute ratio, and can’t just be changed on a dime if you drastically change that ratio. Sure, the compute input has varied massively over 7 OOMs; importantly, this did not happen all at once, and the ecosystem adapted as it went.
SlowCorp would be in a much better position if it were in a world where AI development had evolved with these kinds of bottlenecks in place all along. Frontier pretraining runs would be massively more parallel, and would complete in a day. There would be dramatically more investment in automating hyperparameter sweeps and scaling analyses, rather than depending on human labor for them. The inference-time compute paradigm would have started 1-2 years earlier and would be significantly more mature. How fast would AI progress be for SlowCorp in that world? I agree it would still be slower than current AI progress, but it is really hard to guess how much slower, and it’s definitely drastically faster than if you just drop a SlowCorp into today’s world (where it mostly seems like it would flounder and die immediately).
So we can break down the impacts into two categories:
1. SlowCorp is slower because it has less access to resources. This is reversed for AutomatedCorp, so you’d expect AutomatedCorp to be correspondingly faster.
2. SlowCorp is slower because AI development is specialized to the current labor:compute ratio. This is not reversed for AutomatedCorp; if anything it will slow AutomatedCorp down too (though in practice it probably won’t, since AutomatedCorp has so much serial labor available to fix the issue).
If you want to pump your intuition for what AutomatedCorp should be capable of, the relevant SlowCorp is the one that only faces the first problem. That is, you want to consider the SlowCorp that evolved in a world with those constraints in place all along, not a SlowCorp thrown into a research ecosystem not designed for the constraints it faces. Personally, once I try to imagine that, I just run into a wall of “who even knows what that world looks like” and fail to have my intuition pumped.