Some questions I have:
1. Compute bottleneck
The model says experiment compute becomes the binding constraint once coding is fast. But are frontier labs actually compute-bottlenecked on experiments right now? Anthropic runs inference for millions of users while training models. With revenue growing, more investment coming in, and datacenters being built, couldn't they allocate, say, 2x more compute to research this year if they wanted?
2. Research taste improvement rate
The model estimates AI research taste improvement based on how quickly AIs have improved on a variety of metrics.
But researchers at a given taste level can now run many more experiments because Claude Code removes the coding bottleneck.
More experiment output means faster feedback, which in turn means faster taste development. So the rate at which human researchers develop taste should itself be accelerating. Does your model capture this? Or does it assume taste improvement is only a function of effective compute, not of experiment throughput?
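To make the distinction concrete, here's a toy sketch of the two assumptions (my own framing and made-up exponents, not anything taken from the model):

```python
# Toy comparison (my framing, hypothetical exponents): does taste growth depend only on
# effective compute, or also on experiment throughput per researcher?

def taste_growth_compute_only(effective_compute, k=0.1):
    # Taste improvement as a function of effective compute alone.
    return k * effective_compute ** 0.5

def taste_growth_with_throughput(effective_compute, experiments_per_researcher, k=0.1):
    # Same compute, but faster feedback from more experiments per researcher
    # (e.g. Claude Code removing the coding bottleneck) also speeds up taste development.
    return k * effective_compute ** 0.5 * experiments_per_researcher ** 0.3

# At fixed compute, a 5x jump in experiment throughput gives ~1.6x faster taste growth
# under the second assumption and no change at all under the first.
print(taste_growth_with_throughput(100, 5) / taste_growth_compute_only(100))
```

If the model is closer to the first formulation, it would miss the feedback-loop effect I'm describing.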
3. Low-value code
Ryan’s argument (from his October post) is that AI makes it cheap to generate code, so people generate more low-value code they wouldn’t otherwise have written.
But here’s my question: if the marginal code being written is “low-value” in the sense of “wouldn’t have been worth a human’s time before,” isn’t that still a real productivity gain? For example, researchers can now spin up a bunch of Claude Code agent instances to run experiments instead of having to interface with a bunch of engineers.
4. What AIs Can’t Do
The model treats research taste as qualitatively different from coding ability. But what exactly is the hard thing AIs can’t do? If it’s “generating novel ideas across disciplines” or “coming up with new architectures”, these seem like capabilities that scale with knowledge and reasoning, both of which are improving. IIRC there’s some anecdotal evidence of novel discoveries, e.g. an LLM solving an Erdős problem, and someone in Scott Aaronson’s orbit discussing AI contributions to something like quantum physics problems? Not sure.
If it’s “making codebases more efficient”, AIs already beat humans at competitive programming. I’ve seen some posts on LW where people timed themselves against an AI on something the AI should be able to do, and they beat the AI. But intuitively it does seem to me that models are getting better at the general “optimizing codebases” skill, even if they’re not quite best-human-level yet.
5. Empirical basis for β (diminishing returns)
The shift from AI 2027 to the new model seems to come partly from “taking into account diminishing returns”, i.e. the Jones-model assumption that ideas get harder to find. What data did you use to estimate β? And given that we’re now in a regime with AI-assisted research, why should historical rates of diminishing returns apply going forward?
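For reference, this is the kind of law of motion I have in mind when asking (the standard Jones-style semi-endogenous form; I'm not sure the new model uses exactly this parameterization):

```latex
% A = stock of ideas / algorithmic progress, R = research input,
% \beta governs how sharply ideas get harder to find.
\[
  \dot{A} \;=\; \alpha\, R^{\lambda} A^{\,1-\beta}
  \qquad\Longrightarrow\qquad
  \frac{\dot{A}}{A} \;=\; \alpha\, \frac{R^{\lambda}}{A^{\beta}}
\]
% Larger \beta means each doubling of A makes further progress harder, so sustaining a
% constant growth rate requires ever-growing research input.
```

My question is whether β was fit to historical series in the style of Bloom et al.'s "Are Ideas Getting Harder to Find?", and whether such fits should be expected to transfer to an AI-assisted-research regime.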