I put roughly 50% probability on feasibility of software-only singularity.[1]
(I’m probably going to be reinventing a bunch of the compute-centric takeoff model in slightly different ways below, but I think it’s faster to partially reinvent than to dig up the material, and I probably do use a slightly different approach.)
My argument here will be a bit sloppy and might contain some errors. Sorry about this. I might be more careful in the future.
The key question for software-only singularity is: “When the rate of labor production is doubled (as in, as if your employees ran 2x faster[2]), does that more than double or less than double the rate of algorithmic progress? That is, algorithmic progress as measured by how fast we increase the labor production per FLOP/s (as in, the labor production from AI labor on a fixed compute base).” This is a very economics-style way of analyzing the situation, and I think this is a pretty reasonable first guess. Here’s a diagram I’ve stolen from Tom’s presentation on explosive growth illustrating this:
Basically, every time you double the AI labor supply, does the time until the next doubling (driven by algorithmic progress) increase (fizzle) or decrease (foom)? I’m being a bit sloppy in saying “AI labor supply”. We care about a notion of parallelism-adjusted labor (faster laborers are better than more laborers) and quality increases can also matter. I’ll make the relevant notion more precise below.
I’m about to go into a relatively complicated argument for why I think the historical data supports software-only singularity. If you want more basic questions answered (such as “Doesn’t retraining make this too slow?”), consider looking at Tom’s presentation on takeoff speeds.
Here’s a diagram that you might find useful in understanding the inputs into AI progress:
And here is the relevant historical context in terms of trends:
Historically, algorithmic progress in LLMs looks like 3-4x per year including improvements on all parts of the stack.[3] This notion of algorithmic progress is “reduction in compute needed to reach a given level of frontier performance”, which isn’t equivalent to increases in the rate of labor production on a fixed compute base. I’ll talk more about this below.
This has been accompanied by increases of around 4x more hardware per year[4] and maybe 2x more quality-adjusted (parallel) labor working on LLM capabilities per year. I think total employees working on LLM capabilities have been roughly 3x-ing per year (in recent years), but quality has been decreasing over time.
A 2x increase in the quality-adjusted parallel labor force isn’t as good as the company getting the same sorts of labor tasks done 2x faster (as in, the resulting productivity from having your employees run 2x faster) due to parallelism tax (putting aside compute bottlenecks for now). I’ll apply the same R&D parallelization penalty as used in Tom’s takeoff model and adjust this down by a power of 0.7 to yield 2^0.7 ≈ 1.6x per year in increased labor production rate. (So, it’s as though the company keeps the same employees, but those employees operate 1.6x faster each year.)
It looks like the fraction of progress driven by algorithmic progress has been getting larger over time.
So, overall, we’re getting 3-4x algorithmic improvement per year being driven by 1.6x more labor per year and 4x more hardware.
So, the key question is how much of this algorithmic improvement is being driven by labor vs. by hardware. If it is basically all hardware, then the returns to labor must be relatively weak and software-only singularity seems unlikely. If it is basically all labor, then we’re seeing 3-4x algorithmic improvement per year for 1.6x more labor per year, which means the returns to labor look quite good (at least historically). Based on some guesses and some poll questions, my sense is that capabilities researchers would operate about 2.5x slower if they had 10x less compute (after adaptation), so the production function is probably proportional to compute^0.4 ⋅ labor^0.6 (0.4 = log10(2.5)). (This is assuming a Cobb-Douglas production function.) Edit: see the derivation of the relevant thing in Deep’s comment, my old thing was wrong[5].
Now, let’s talk more about the transfer from algorithmic improvement to the rate of labor production. A 2x algorithmic improvement in LLMs makes it so that you can reach the same (frontier) level of performance for 2x less training compute, but we actually care about a somewhat different notion for software-only singularity: how much you can increase the production rate of labor (the thing that we said was increasing at roughly a rate of 1.6x per year by using more human employees). My current guess is that every 2x algorithmic improvement in LLMs increases the rate of labor production by 2^1.1, and I’m reasonably confident that the exponent isn’t much below 1.0. I don’t currently have a very principled estimation strategy for this, and it’s somewhat complex to reason about. I discuss this in the appendix below.
So, if this exponent is around 1, our central estimate of 2.3 from above corresponds to software-only singularity and our estimate of 1.56 from above under more pessimistic assumptions corresponds to this not being feasible. Overall, my sense is that the best guess numbers lean toward software-only singularity.
More precisely, software-only singularity that goes for >500x effective compute gains above trend (to the extent this metric makes sense, this is roughly >5 years of algorithmic progress). Note that you can have software-only singularity be feasible while buying tons more hardware at the same time. And if this ends up expanding compute production by >10x using AI labor, then this would count as massive compute production despite also having a feasible software-only singularity. (However, in most worlds, I expect software-only singularity to be fast enough, if feasible, that we don’t see this.)
Rather than denominating labor in accelerating employees, we could instead denominate in number of parallel employees. This would work equivalently (we can always convert into equivalents to the extent these things can funge), but because we can actually accelerate employees and the serial vs. parallel distinction is important, I think it is useful to denominate in accelerating employees.
I would have previously cited 3x, but recent progress looks substantially faster (with DeepSeek v3 and reasoning models seemingly indicating somewhat faster than 3x progress IMO), so I’ve revised to 3-4x.
This includes both increased spending and improved chips. Here, I’m taking my best guess at increases in hardware usage for training and transferring this to research compute usage on the assumption that training compute and research compute have historically been proportional.
Edit: the reasoning I did here was off. Here was the old text: so the production function is probably roughly α ⋅ compute^0.4 ⋅ labor^0.6 (0.4 = log10(2.5)). Increasing compute by 4x and labor by 1.6x increases algorithmic improvement by 3-4x, let’s say 3.5x, so we have 3.5 = α ⋅ 4^0.4 ⋅ 1.6^0.6, giving α = 3.5/(4^0.4 ⋅ 1.6^0.6) = 1.52. Thus, doubling labor would increase algorithmic improvement by 1.52 ⋅ 2^0.6 = 2.3. This is very sensitive to the exact numbers; if we instead used 3x slower instead of 2.5x slower, we would have gotten that doubling labor increases algorithmic improvement by 1.56, which is substantially lower. Obviously, all the exact numbers here are highly uncertain.
Hey Ryan! Thanks for writing this up—I think this whole topic is important and interesting.
I was confused about how your analysis related to the Epoch paper, so I spent a while with Claude analyzing it. I did a re-analysis that finds similar results, but also finds (I think) some flaws in your rough estimate. (Keep in mind I’m not an expert myself, and I haven’t closely read the Epoch paper, so I might well be making conceptual errors. I think the math is right though!)
I’ll walk through my understanding of this stuff first, then compare to your post. I’ll be going a little slowly (A) so I can refresh my memory by referencing this later, (B) to make it easy to call out mistakes, and (C) to hopefully make this legible to others who want to follow along.
Using Ryan’s empirical estimates in the Epoch model
The Epoch model
The Epoch paper models growth with the following equation: (1) d(ln A)/dt ∝ A^(−β) E^λ,
where A = efficiency and E = research input. We want to consider worlds with a potential software takeoff, meaning that increases in AI efficiency directly feed into research input, which we model as d(ln A)/dt ∝ A^(−β) A^λ = A^(λ−β). So the key consideration seems to be the ratio λ/β. If it’s 1, we get steady exponential growth from scaling inputs; greater, superexponential; smaller, subexponential.[1]
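To make the three regimes concrete, here’s a minimal numerical sketch (my own toy integration, not anything from the Epoch paper; the constant c, the 12-year horizon, and β = 1 are arbitrary choices) of d(ln A)/dt = c · A^(λ−β):

```python
import math

def simulate(ratio, years=12.0, dt=1e-4, c=1.0):
    """Integrate d(lnA)/dt = c * A**(ratio - 1), i.e. beta = 1 and lambda/beta = ratio."""
    lnA, t = 0.0, 0.0
    while t < years:
        lnA += c * math.exp((ratio - 1.0) * lnA) * dt
        t += dt
        if lnA > 200:  # effectively infinite: super-exponential blowup
            return f"diverges around year {t:.1f}"
    return f"grows {math.exp(lnA):.3g}x over {years:.0f} years"

for ratio in (0.9, 1.0, 1.1):
    print(f"lambda/beta = {ratio}: {simulate(ratio)}")
```

With a ratio below 1 growth fizzles, at exactly 1 it stays exponential, and above 1 it blows up in finite time.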
Fitting the model
How can we learn about this ratio from historical data?
Let’s pretend history has been convenient and we’ve seen steady exponential growth in both variables, so A = A_0 e^(rt) and E = E_0 e^(qt). Then d(ln A)/dt has been constant over time, so by equation 1, A(t)^(−β) E(t)^λ has been constant as well. Substituting in for A and E, we find that A_0^(−β) e^(−βrt) E_0^λ e^(λqt) is constant over time, which is only possible if βr = λq and the time-dependent exponent is always zero. Thus if we’ve seen steady exponential growth, the historical value of our key ratio is:
(2) λ/β = r/q.
Intuitively, if we’ve seen steady exponential growth while research input has increased more slowly than research output (AI efficiency), there are superlinear returns to scaling inputs.
Introducing the Cobb-Douglas function
But wait! E, research input, is an abstraction that we can’t directly measure. Really there’s both compute and labor inputs. Those have indeed been growing roughly exponentially, but at different rates.
Intuitively, it makes sense to say that “effective research input” has grown as some kind of weighted average of the rate of compute and labor input growth. This is my take on why a Cobb-Douglas function of form (3) E ∝ C^p L^(1−p), with a weight parameter 0 < p < 1, is useful here: it’s a weighted geometric average of the two inputs, so its growth rate is a weighted average of their growth rates.
Writing that out: in general, say both inputs have grown exponentially, so C(t) = C_0 e^(q_c t) and L(t) = L_0 e^(q_l t). Then E has grown as E(t) = E_0 e^(qt) = E_0 e^((p q_c + (1−p) q_l) t), so q is the weighted average (4) q = p q_c + (1−p) q_l of the growth rates of compute and labor.
Then, using Equation 2, we can estimate our key ratio λ/β as r/q = r/(p q_c + (1−p) q_l).
Let’s get empirical!
Plugging in your estimates:
Historical compute scaling of 4x/year gives q_c = ln(4);
Historical labor scaling of 1.6x gives q_l = ln(1.6);
Historical compute elasticity on research outputs of 0.4 gives p = 0.4;
Adding these together, q = 0.4 ln(4) + 0.6 ln(1.6) ≈ 0.84 ≈ ln(2.3).[2]
Historical efficiency improvement of 3.5x/year gives r = ln(3.5).
So λ/β = ln(3.5)/ln(2.3) ≈ 1.5.[3]
Adjusting for labor-only scaling
But wait: we’re not done yet! Under our Cobb-Douglas assumption, scaling labor by a factor of 2 isn’t as good as scaling all research inputs by a factor of 2; it’s only 2^0.6/2 as good.
Plugging in Equation 3 (which describes research input E in terms of compute and labor) to Equation 1 (which estimates AI progress A based on research), our adjusted form of the Epoch model is d(ln A)/dt ∝ A^(−β) E^λ ∝ A^(−β) C^(pλ) L^((1−p)λ).
Under a software-only singularity, we hold compute constant while scaling labor with AI efficiency, so d(ln A)/dt ∝ A(t)^(−β) L(t)^((1−p)λ) multiplied by a fixed compute term. Since labor scales as A, we have d(ln A)/dt ∝ A(t)^(−β) A(t)^(λ(1−p)) = A(t)^(λ(1−p)−β). By the same analysis as in our first section, we can see A grows exponentially if λ(1−p)/β = 1, and grows superexponentially if this ratio is >1. So our key ratio λ/β just gets multiplied by 1−p, and it wasn’t a waste to find it, phew!
Now we get the true form of our equation: we get a software-only foom iff (λ/β)(1−p) > 1, or (via equation 2) iff we see empirically that (r/q)(1−p) > 1. Call this the takeoff ratio: it corresponds to a) how much AI progress scales with inputs and b) how much of a penalty we take for not scaling compute.
Result: Above, we got λ/β ≈ 1.5, so our takeoff ratio is 0.6 ⋅ 1.5 = 0.9. That’s quite close! If we think it’s more reasonable to use a historical growth rate of 4x instead of 3.5x, we’d increase our takeoff ratio by a factor of ln(4)/ln(3.5) ≈ 1.1, to a ratio of ~0.99, right on the knife edge of FOOM. [4][note: I previously had the wrong numbers here: I had lambda/beta = 1.6, which would mean the 4x/year case has a takeoff ratio of 1.05, putting it into FOOM land]
So this isn’t too far off from your results in terms of implications, but it is somewhat different (no FOOM for 3.5x, less sensitivity to the exact historical growth rate).
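For anyone who wants to check the arithmetic, here’s a small Python reproduction of the calculation above (using the same point estimates; nothing new beyond the numbers already quoted):

```python
import math

r = math.log(3.5)            # historical software efficiency growth, ~3.5x/year
q_c = math.log(4)            # historical compute growth, ~4x/year
q_l = math.log(1.6)          # historical (parallelism-adjusted) labor growth, ~1.6x/year
p = 0.4                      # compute share in the Cobb-Douglas production function

q = p * q_c + (1 - p) * q_l              # growth rate of effective research input E
key_ratio = r / q                        # lambda/beta, ~1.5
takeoff_ratio = (1 - p) * key_ratio      # penalty for holding compute fixed, ~0.9
print(f"q = {q:.2f}, lambda/beta = {key_ratio:.2f}, takeoff ratio = {takeoff_ratio:.2f}")

# Sensitivity: using 4x/year instead of 3.5x/year for efficiency growth
print(f"with 4x/year efficiency growth: {(1 - p) * math.log(4) / q:.2f}")
```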
Analyzing your approach:
Tweaking alpha:
Your estimate of α is in fact similar in form to my ratio r/q, but what you’re calculating instead is α = e^r/e^q = 3.5/(4^0.4 ⋅ 1.6^0.6).
One indicator that something’s wrong is that your result involves checking whether α ⋅ 2^(1−p) > 2, or equivalently whether ln(α) + (1−p) ln(2) > ln(2), or equivalently whether ln(α) > p ⋅ ln(2). But the choice of 2 is arbitrary—conceptually, you just want to check if scaling software by a factor n increases outputs by a factor n or more. Yet ln(α) − p ⋅ ln(n) clearly varies with n.
One way of parsing the problem is that alpha is (implicitly) time dependent—it is equal to exp(r * 1 year) / exp(q * 1 year), a ratio of progress vs inputs in the time period of a year. If you calculated alpha based on a different amount of time, you’d get a different value. By contrast, r/q is a ratio of rates, so it stays the same regardless of what timeframe you use to measure it.[5]
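A quick numerical illustration of this point (my own sketch, reusing the point estimates from above): the α-style check flips its verdict depending on the window length T, while the rate-based check doesn’t depend on any timescale.

```python
import math

r = math.log(3.5)                              # efficiency growth rate
q = 0.4 * math.log(4) + 0.6 * math.log(1.6)    # effective research-input growth rate
p = 0.4

for T in (0.5, 1.0, 2.0):                      # window length in years
    alpha = math.exp((r - q) * T)              # "progress vs. inputs" measured over T years
    print(f"T = {T}: alpha = {alpha:.2f}, alpha-style check passes: {alpha * 2 ** (1 - p) > 2}")

print("rate-based check passes:", (r / q) * (1 - p) > 1)   # timescale-independent
```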
Maybe I’m confused about what your Cobb-Douglas function is meant to be calculating—is it E within an Epoch-style takeoff model, or something else?
Nuances:
Does Cobb-Douglas make sense?
The geometric average of rates thing makes sense, but it feels weird that that simple intuitive approach leads to a functional form (Cobb-Douglas) that also has other implications.
Wikipedia says Cobb-Douglas functions can have the exponents not add to 1 (while both being between 0 and 1). Maybe this makes sense here? Not an expert.
How seriously should we take all this?
This whole thing relies on...
Assuming smooth historical trends
Assuming those trends continue in the future
And those trends themselves are based on functional fits to rough / unclear data.
It feels like this sort of thing is better than nothing, but I wish we had something better.
I really like the various nuances you’re adjusting for, like parallel vs serial scaling, and especially distinguishing algorithmic improvement from labor efficiency. [6] Thinking those things through makes this stuff feel less insubstantial and approximate...though the error bars still feel quite large.
Actually there’s a complexity here, which is that scaling labor alone may be less efficient than scaling “research inputs” which include both labor and compute. We’ll come to this in a few paragraphs.
I originally had 1.6 here, but as Ryan points out in a reply it’s actually 1.5. I’ve tried to reconstruct what I could have put into a calculator to get 1.6 instead, and I’m at a loss!
I was curious how aggressive the superexponential growth curve would be with a takeoff ratio of a mere 0.96∗1.1=1.056. A couple of Claude queries gave me different answers (maybe because the growth is so extreme that different solvers give meaningfully different approximations?), but they agreed that growth is fairly slow in the first year (~5x) and then hits infinity by the end of the second year. I wrote this comment with the wrong numbers (0.96 instead of 0.9), so it doesn’t accurately represent what you get if you plug in 4x capability growth per year. Still cool to get a sense of what these curves look like, though.
I think this can be understood in terms of the alpha-being-implicitly-a-timescale-function thing—if you compare an alpha value with the ratio of growth you’re likely to see during the same time period, e.g. alpha(1 year) and n = one doubling, you probably get reasonable-looking results.
I find it annoying that people conflate “increased efficiency of doing known tasks” with “increased ability to do new useful tasks”. It seems to me that these could be importantly different, although it’s hard to even settle on a reasonable formalization of the latter. Some reasons this might be okay:
There’s a fuzzy conceptual boundary between the two: if GPT-n can do the task at 0.01% success rate, does that count as a “known task?” what about if it can do each of 10 components at 0.01% success, so in practice we’ll never see it succeed if run without human guidance, but we know it’s technically possible?
Under a software singularity situation, maybe the working hypothesis is that the model can do everything necessary to improve itself a bunch, maybe just not very efficiently yet. So we only need efficiency growth, not to increase the task set. That seems like a stronger assumption than most make, but maybe a reasonable weaker assumption is that the model will ‘unlock’ the necessary new tasks over time, after which point they become subject to rapid efficiency growth.
And empirically, we have in fact seen rapid unlocking of new capabilities, so it’s not crazy to approximate “being able to do new things” as a minor but manageable slowdown to the process of AI replacing human AI R&D labor.
I think you are correct with respect to my estimate of α and the associated model I was using. Sorry about my error here. I think I was fundamentally confusing a few things in my head when writing out the comment.
I think your refactoring of my strategy is correct and I tried to check it myself, though I don’t feel confident in verifying it is correct.
Your estimate doesn’t account for the conversion between algorithmic improvement and labor efficiency, but it is easy to add this in by just changing the historical algorithmic efficiency improvement of 3.5x/year to instead be the adjusted effective labor efficiency rate and then solving identically. I was previously thinking the relationship was that labor efficiency was around the same as algorithmic efficiency, but I now think this is more likely to be around algo_efficiency^2 based on Tom’s comment.
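To make that concrete, here’s the same takeoff-ratio calculation with the conversion folded in (a rough sketch; the exponent 2 is the guess from Tom’s comment, and 1.1 is the estimate from the appendix of the post):

```python
import math

q = 0.4 * math.log(4) + 0.6 * math.log(1.6)   # effective research-input growth rate
for conversion_exponent in (1.0, 1.1, 2.0):   # labor-efficiency doublings per algo-efficiency doubling
    r_labor = conversion_exponent * math.log(3.5)
    takeoff_ratio = (1 - 0.4) * r_labor / q
    print(conversion_exponent, round(takeoff_ratio, 2))
# 1.0 -> 0.9 (as before); 1.1 -> ~0.99; 2.0 -> ~1.8, comfortably above the foom threshold of 1
```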
Neat, thanks a ton for the algorithmic-vs-labor update—I appreciated that you’d distinguished those in your post, but I forgot to carry that through in mine! :)
And oops, I really don’t know how I got to 1.6 instead of 1.5 there. Thanks for the flag, have updated my comment accordingly!
The square relationship idea is interesting—that factor of 2 is a huge deal. Would be neat to see a Guesstimate or Squiggle version of this calculation that tries to account for the various nuances Tom mentions, and has error bars on each of the terms, so we both get a distribution of r and a sensitivity analysis. (Maybe @Tom Davidson already has this somewhere? If not I might try to make a crappy version myself, or poke talented folks I know to do a good version :)
It feels like this sort of thing is better than nothing, but I wish we had something better.
The existing epoch paper is pretty good, but doesn’t directly target LLMs in a way which seems somewhat sad.
The thing I’d be most excited about is:
Epoch does an in depth investigation using an estimation methodology which is directly targeting LLMs (rather than looking at returns in some other domains).
They use public data and solicit data from companies about algorithmic improvement, head count, compute on experiments etc.
(Some) companies provide this data. Epoch potentially doesn’t publish this exact data and instead just publishes the results of the final analysis to reduce capabilities externalities. (IMO, companies are somewhat unlikely to do this, but I’d like to be proven wrong!)
(I’m going through this and understanding where I made an error with my approach to α. I think I did make an error, but I’m trying to make sure I’m not still confused. Edit: I’ve figured this out, see my other comment.)
Wikipedia says Cobb-Douglas functions can have the exponents not add to 1 (while both being between 0 and 1). Maybe this makes sense here? Not an expert.
It shouldn’t matter in this case because we’re raising the whole value of E to λ.
Once AI has automated AI R&D, will software progress become faster or slower over time? This depends on the extent to which software improvements get harder to find as software improves – the steepness of the diminishing returns.
We can ask the following crucial empirical question:
When (cumulative) cognitive research inputs double, how many times does software double?
If the answer is “< 1”, then software progress will slow down over time. If the answer is “1”, software progress will remain at the same exponential rate. If the answer is “>1”, software progress will speed up over time.
The bolded question can be studied empirically, by looking at how many times software has doubled each time the human researcher population has doubled.
(What does it mean for “software” to double? A simple way of thinking about this is that software doubles when you can run twice as many copies of your AI with the same compute. But software improvements don’t just improve runtime efficiency: they also improve capabilities. To incorporate these improvements, we’ll ultimately need to make some speculative assumptions about how to translate capability improvements into an equivalently-useful runtime efficiency improvement.)
The best quality data on this question is Epoch’s analysis of computer vision training efficiency. They estimate r = ~1.4: every time the researcher population doubled, training efficiency doubled 1.4 times. (Epoch’s preliminary analysis indicates that the r value for LLMs would likely be somewhat higher.) We can use this as a starting point, and then make various adjustments:
Upwards for improving capabilities. Improving training efficiency improves capabilities, as you can train a model with more “effective compute”. To quantify this effect, imagine we use a 2X training efficiency gain to train a model with twice as much “effective compute”. How many times would that double “software”? (I.e., how many doublings of runtime efficiency would have the same effect?) There are various sources of evidence on how much capabilities improve every time training efficiency doubles: toy ML experiments suggest the answer is ~1.7; human productivity studies suggest the answer is ~2.5. I put more weight on the former, so I’ll estimate 2. This doubles my median estimate to r = ~2.8 (= 1.4 * 2).
Upwards for post-training enhancements. So far, we’ve only considered pre-training improvements. But post-training enhancements like fine-tuning, scaffolding, and prompting also improve capabilities (o1 was developed using such techniques!). It’s hard to say how large an increase we’ll get from post-training enhancements. These can allow faster thinking, which could be a big factor. But there might also be strong diminishing returns to post-training enhancements holding base models fixed. I’ll estimate a 1-2X increase, and adjust my median estimate to r = ~4 (2.8*1.45=4).
Downwards for less growth in compute for experiments. Today, rising compute means we can run increasing numbers of GPT-3-sized experiments each year. This helps drive software progress. But compute won’t be growing in our scenario. That might mean that returns to additional cognitive labour diminish more steeply. On the other hand, the most important experiments are ones that use similar amounts of compute to training a SOTA model. Rising compute hasn’t actually increased the number of these experiments we can run, as rising compute increases the training compute for SOTA models. And in any case, this doesn’t affect post-training enhancements. But this still reduces my median estimate down to r = ~3. (See Eth (forthcoming) for more discussion.)
Downwards for fixed scale of hardware. In recent years, the scale of hardware available to researchers has increased massively. Researchers could invent new algorithms that only work at the new hardware scales for which no one had previously tried to develop algorithms. Researchers may have been plucking low-hanging fruit for each new scale of hardware. But in the software intelligence explosions I’m considering, this won’t be possible because the hardware scale will be fixed. OAI estimate ImageNet efficiency via a method that accounts for this (by focussing on a fixed capability level), and find a 16-month doubling time, as compared with Epoch’s 9-month doubling time. This reduces my estimate down to r = ~1.7 (3 * 9⁄16).
Downwards for diminishing returns becoming steeper over time. In most fields, returns diminish more steeply than in software R&D. So perhaps software will tend to become more like the average field over time. To estimate the size of this effect, we can take our estimate that software is ~10 OOMs from physical limits (discussed below), and assume that for each OOM increase in software, r falls by a constant amount, reaching zero once physical limits are reached. If r = 1.7, then this implies that r reduces by 0.17 for each OOM. Epoch estimates that pre-training algorithmic improvements are growing by an OOM every ~2 years, which would imply a reduction in r of 1.02 (6*0.17) by 2030. But when we include post-training enhancements, the decrease will be smaller (as [reason]), perhaps ~0.5. This reduces my median estimate to r = ~1.2 (1.7-0.5).
Overall, my median estimate of r is 1.2. I use a log-uniform distribution with the bounds 3X higher and lower (0.4 to 3.6).
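Collecting the chain of adjustments above in one place (just the stated median multipliers; carrying exact intermediates gives ~1.21 rather than the rounded ~1.2 quoted in the text):

```python
# Chain of median adjustments to r described above.
r = 1.4            # Epoch's computer-vision training-efficiency estimate
r *= 2             # upwards for improving capabilities
r *= 1.45          # upwards for post-training enhancements (1-2x, taken as ~1.45x)
r *= 3 / 4         # downwards for less growth in compute for experiments (~4 -> ~3)
r *= 9 / 16        # downwards for fixed hardware scale (16- vs 9-month doubling)
r -= 0.5           # downwards for diminishing returns steepening by ~2030
print(round(r, 2)) # ~1.2
```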
My sense is that I start with a higher r value due to the LLM case looking faster (and not feeling the need to adjust downward in a few places like you do in the LLM case). Obviously the numbers in the LLM case are much less certain given that I’m guessing based on qualitative improvement and looking at some open source models, but being closer to what we actually care about maybe overwhelms this.
I also think I’d get a slightly lower update on the diminishing returns case due to thinking it has a good chance of having substantially sharper diminishing returns as you get closer and closer, rather than having linearly decreasing r (based on some first principles reasoning and my understanding of how returns diminished in the semi-conductor case).
But the biggest delta is that I think I wasn’t pricing in the importance of increasing capabilities. (Which seems especially important if you apply a large R&D parallelization penalty.)
Obviously the numbers in the LLM case are much less certain given that I’m guessing based on qualitative improvement and looking at some open source models,
Sorry, I don’t follow why they’re less certain?
based on some first principles reasoning and my understanding of how returns diminished in the semi-conductor case
I’d be interested to hear more about this. The semi-conductor case is hard as we don’t know how far we are from limits, but if we use Landauer’s limit then I’d guess you’re right. There’s also uncertainty about how much algorithmic progress we have made and will make.
I’m just eyeballing the rate of algorithmic progress, while in the computer vision case we can at least look at benchmarks and know the training compute costs for various models.
My sense is that you have generalization issues in the computer vision case, while in the frontier LLM case you have issues with knowing the actual numbers (in terms of number of employees and cost of training runs). I’m also just not carefully doing the accounting.
I’d be interested to hear more about this.
I don’t have much to say here sadly, but I do think investigating this could be useful.
Really appreciate you covering all these nuances, thanks Tom!
Can you give a pointer to the studies you mentioned here?
There are various sources of evidence on how much capabilities improve every time training efficiency doubles: toy ML experiments suggest the answer is ~1.7; human productivity studies suggest the answer is ~2.5. I put more weight on the former, so I’ll estimate 2. This doubles my median estimate to r = ~2.8 (= 1.4 * 2).
Here’s a simple argument I’d be keen to get your thoughts on: On the Possibility of a Tastularity
Research taste is the collection of skills including experiment ideation, literature review, experiment analysis, etc. that collectively determine how much you learn per experiment on average (perhaps alongside another factor accounting for inherent problem difficulty / domain difficulty, of course, and diminishing returns)
Human researchers seem to vary quite a bit in research taste—specifically, the difference between 90th percentile professional human researchers and the very best seems like maybe an order of magnitude? Depends on the field, etc. And the tails are heavy; there is no sign of the distribution bumping up against any limits.
Yet the causes of these differences are minor! Take the very best human researchers compared to the 90th percentile. They’ll have almost the same brain size, almost the same amount of experience, almost the same genes, etc. in the grand scale of things.
This means we should assume that if the human population were massively bigger, e.g. trillions of times bigger, there would be humans whose brains don’t look that different from the brains of the best researchers on Earth, and yet who are an OOM or more above the best Earthly scientists in research taste. -- AND it suggests that in the space of possible mind-designs, there should be minds which are e.g. within 3 OOMs of those brains in every dimension of interest, and which are significantly better still in the dimension of research taste. (How much better? Really hard to say. But it would be surprising if it was only, say, 1 OOM better, because that would imply that human brains are running up against the inherent limits of research taste within a 3-OOM mind design space, despite human evolution having only explored a tiny subspace of that space, and despite the human distribution showing no signs of bumping up against any inherent limits)
OK, so what? So, it seems like there’s plenty of room to improve research taste beyond human level. And research taste translates pretty directly into overall R&D speed, because it’s about how much experimentation you need to do to achieve a given amount of progress. With enough research taste, you don’t need to do experiments at all—or rather, you look at the experiments that have already been done, and you infer from them all you need to know to build the next design or whatever.
Anyhow, tying this back to your framework: What if the diminishing returns / increasing problem difficulty / etc. dynamics are such that, if you start from a top-human-expert-level automated researcher, and then do additional AI research to double its research taste, and then do additional AI research to double its research taste again, etc. the second doubling happens in less time than it took to get to the first doubling? Then you get a singularity in research taste (until these conditions change of course) -- the Tastularity.
How likely is the Tastularity? Well, again one piece of evidence here is the absurdly tiny differences between humans that translate to huge differences in research taste, and the heavy-tailed distribution. This suggests that we are far from any inherent limits on research taste even for brains roughly the shape and size and architecture of humans, and presumably the limits for a more relaxed (e.g. 3 OOM radius in dimensions like size, experience, architecture) space in mind-design are even farther away. It similarly suggests that there should be lots of hill-climbing that can be done to iteratively improve research taste.
How does this relate to software-singularity? Well, research taste is just one component of algorithmic progress; there is also speed, # of parallel copies & how well they coordinate, and maybe various other skills besides such as coding ability. So even if the Tastularity isn’t possible, improvements in taste will stack with improvements in those other areas, and the sum might cross the critical threshold.
In my framework, this is basically an argument that algorithmic-improvement-juice can be translated into a large improvement in AI R&D labor production via the mechanism of greatly increasing the productivity per “token” (or unit of thinking compute or whatever). See my breakdown here where I try to convert from historical algorithmic improvement to making AIs better at producing AI R&D research.
Your argument is basically that this taste mechanism might have higher returns than reducing cost to run more copies.
I agree this sort of argument means that returns to algorithmic improvement on AI R&D labor production might be bigger than you would otherwise think. This is both because this mechanism might be more promising than other mechanisms and, even if it is somewhat less promising, diverse approaches make returns diminish less aggressively. (In my model, this means that best guess conversion might be more like algo_improvement^1.3 rather than algo_improvement^1.0.)
I think it might be somewhat tricky to train AIs to have very good research taste, but this doesn’t seem that hard via training them on various prediction objectives.
At a more basic level, I expect that training AIs to predict the results of experiments and then running experiments based on value of information as estimated partially based on these predictions (and skipping experiments with certain results and more generally using these predictions to figure out what to do) seems pretty promising. It’s really hard to train humans to predict the results of tens of thousands of experiments (both small and large), but this is relatively clean outcomes based feedback for AIs.
I don’t really have a strong inside view on how much the “AI R&D research taste” mechanism increases the returns to algorithmic progress.
I’ll paste my own estimate for this param in a different reply.
But here are the places I most differ from you:
Bigger adjustment for ‘smarter AI’. You’ve argued in your appendix that, only including ‘more efficient’ and ‘faster’ AI, you think the software-only singularity goes through. I think including ‘smarter’ AI makes a big difference. This evidence suggests that doubling training FLOP doubles output-per-FLOP 1-2 times. In addition, algorithmic improvements will improve runtime efficiency. So overall I think a doubling of algorithms yields ~two doublings of (parallel) cognitive labour.
--> software singularity more likely
Lower lambda. I’d now use more like lambda = 0.4 as my median. There’s really not much evidence pinning this down; I think Tamay Besiroglu thinks there’s some evidence for values as low as 0.2. This will decrease the observed historical increase in human workers more than it decreases the gains from algorithmic progress (bc of speed improvements)
--> software singularity slightly more likely
Complications thinking about compute which might be a wash.
Number of useful-experiments has increased by less than 4X/year. You say compute inputs have been increasing at 4X. But simultaneously the scale of experiments ppl must run to be near to the frontier has increased by a similar amount. So the number of near-frontier experiments has not increased at all.
This argument would be right if the ‘usefulness’ of an experiment depends solely on how much compute it uses compared to training a frontier model. I.e. experiment_usefulness = log(experiment_compute / frontier_model_training_compute). The 4X/year increases the numerator and denominator of the expression, so there’s no change in usefulness-weighted experiments.
That might be false. GPT-2-sized experiments might in some ways be equally useful even as frontier model size increases. Maybe a better expression would be experiment_usefulness = alpha * log(experiment_compute / frontier_model_training_compute) + beta * log(experiment_compute). In this case, the number of usefulness-weighted experiments has increased due to the second term.
--> software singularity slightly more likely
Steeper diminishing returns during software singularity. Recent algorithmic progress has grabbed low-hanging fruit from new hardware scales. During a software-only singularity that won’t be possible. You’ll have to keep finding new improvements on the same hardware scale. Returns might diminish more quickly as a result.
--> software singularity slightly less likely
Compute share might increase as it becomes scarce. You estimate a share of 0.4 for compute, which seems reasonable. But it might fall over time as compute becomes a bottleneck. As an intuition pump, if your workers could think 1e10 times faster, you’d be fully constrained on the margin by the need for more compute: more labour wouldn’t help at all but more compute could be fully utilised so the compute share would be ~1.
--> software singularity slightly less likely
--> overall these compute adjustments prob make me more pessimistic about the software singularity, compared to your assumptions
Taking it all together, I think you should put more probability on the software-only singularity, mostly because of capability improvements being much more significant than you assume.
Yep, I think my estimates were too low based on these considerations and I’ve updated up accordingly. I updated down on your argument that maybe r decreases linearly as you approach optimal efficiency. (I think it probably doesn’t decrease linearly and instead drops faster towards the end based partially on thinking a bit about the dynamics and drawing on the example of what we’ve seen in semi-conductor improvement over time, but I’m not that confident.) Maybe I’m now at like 60% software-only is feasible given these arguments.
Lower lambda. I’d now use more like lambda = 0.4 as my median. There’s really not much evidence pinning this down; I think Tamay Besiroglu thinks there’s some evidence for values as low as 0.2.
Isn’t this really implausible? This implies that if you had 1000 researchers/engineers of average skill at OpenAI doing AI R&D, this would be as good as having one average skill researcher running at 16x (1000^0.4) speed. It does seem very slightly plausible that having someone as good as the best researcher/engineer at OpenAI run at 16x speed would be competitive with OpenAI, but that isn’t what this term is computing. 0.2 is even more crazy, implying that 1000 researchers/engineers is as good as one researcher/engineer running at 4x speed!
I think 0.4 is far on the lower end (maybe 15th percentile) for all the way down to one accelerated researcher, but seems pretty plausible at the margin.
As in, 0.4 suggests that 1000 researchers = 100 researchers at 2.5x speed which seems kinda reasonable while 1000 researchers = 1 researcher at 16x speed does seem kinda crazy / implausible.
So, I think my current median lambda at likely margins is like 0.55 or something and 0.4 is also pretty plausible at the margin.
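For intuition on what different λ values imply, here’s a tiny sketch of the parallelism-penalty conversion being discussed (the λ values are the ones mentioned in this thread, plus the 0.7 used in the original post):

```python
# n parallel researchers are treated as equivalent to one researcher sped up by n**lam.
def serial_equivalent_speedup(n_parallel: int, lam: float) -> float:
    return n_parallel ** lam

for lam in (0.2, 0.4, 0.55, 0.7):
    speedup = serial_equivalent_speedup(1000, lam)
    print(f"lambda = {lam}: 1000 researchers ~ one researcher at {speedup:.0f}x speed")
# lambda = 0.4 gives ~16x and lambda = 0.2 gives ~4x, matching the figures above
```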
Ok, I think what is going on here is maybe that the constant you’re discussing here is different from the constant I was discussing. I was trying to discuss the question of how much worse serial labor is than parallel labor, but I think the lambda you’re talking about takes into account compute bottlenecks and similar?
Taking it all together, I think you should put more probability on the software-only singularity, mostly because of capability improvements being much more significant than you assume.
I’m confused — I thought you put significantly less probability on software-only singularity than Ryan does? (Like half?) Maybe you were using a different bound for the number of OOMs of improvement?
Sorry, for my comments on this post I’ve been referring to “software only singularity?” only as “will the parameter r > 1 when we first fully automate AI R&D”, not as a threshold for some number of OOMs. That’s what Ryan’s analysis seemed to be referring to.
I separately think that even if initially r>1 the software explosion might not go on for that long
I think Tom’s take is that he expects I will put more probability on software only singularity after updating on these considerations. It seems hard to isolate where Tom and I disagree based on this comment, but maybe it is on how much to weigh various considerations about compute being a key input.
Appendix: Estimating the relationship between algorithmic improvement and labor production
In particular, if we fix the architecture to use a token abstraction and consider training a new improved model: we care about how much cheaper you make generating tokens at a given level of performance (in inference tok/flop), how much serially faster you make generating tokens at a given level of performance (in serial speed: tok/s at a fixed level of tok/flop), and how much more performance you can get out of tokens (labor/tok, really per serial token). Then, for a given new model with reduced cost, increased speed, and increased production per token and assuming a parallelism penalty of 0.7, we can compute the increase in production as roughly: cost_reduction^0.7 ⋅ speed_increase^(1−0.7) ⋅ productivity_multiplier[1] (I can show the math for this if there is interest).
My sense is that reducing inference compute needed for a fixed level of capability that you already have (using a fixed amount of training run) is usually somewhat easier than making frontier compute go further by some factor, though I don’t think it is easy to straightforwardly determine how much easier this is[2]. Let’s say there is a 1.25 exponent on reducing cost (as in, a 2x algorithmic efficiency improvement is as hard as a 2^1.25 = 2.38x reduction in cost)? (I’m also generally pretty confused about what the exponent should be. I think exponents from 0.5 to 2 seem plausible, though I’m pretty confused. 0.5 would correspond to the square root from just scaling data in scaling laws.) It seems substantially harder to increase speed than to reduce cost as speed is substantially constrained by serial depth, at least when naively applying transformers. Naively, reducing cost by β (which implies reducing parameters by β) will increase speed by somewhat more than β^(1/3), as parameter count scales roughly cubically with depth. I expect you can do somewhat better than this because reduced matrix sizes also increase speed (it isn’t just depth) and because you can introduce speed-specific improvements (that just improve speed and not cost). But this factor might be pretty small, so let’s stick with 1/3 for now and ignore speed-specific improvements. Now, let’s consider the case where we don’t have productivity multipliers (which is strictly more conservative). Then, we get that the increase in labor production is: cost_reduction^0.7 ⋅ speed_increase^0.3 = (algo_improvement^1.25)^0.7 ⋅ (algo_improvement^(1.25/3))^0.3 = algo_improvement^(0.875+0.125) = algo_improvement^1.0.
So, these numbers ended up yielding an exact equivalence between frontier algorithmic improvement and effective labor production increases. (This is a coincidence, though I do think the exponent is close to 1.)
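Here is the same arithmetic as a tiny script (using the guessed parameters above; varying the cost exponent over the 0.5-2 range mentioned shows how sensitive the resulting labor-production exponent is to that guess):

```python
parallel_penalty = 0.7
speed_fraction = 1 / 3          # speed gain ~ cost_reduction^(1/3), from the serial-depth argument

def labor_exponent(cost_exponent: float) -> float:
    # labor production ~ algo_improvement ** labor_exponent(cost_exponent)
    return cost_exponent * parallel_penalty + cost_exponent * speed_fraction * (1 - parallel_penalty)

for cost_exponent in (0.5, 1.0, 1.25, 2.0):
    print(cost_exponent, round(labor_exponent(cost_exponent), 2))
# cost_exponent = 1.25 gives exactly 1.0, the coincidental exact equivalence noted above
```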
In practice, we’ll be able to get slightly better returns by spending some of our resources investing in speed-specific improvements and in improving productivity rather than in reducing cost. I don’t currently have a principled way to estimate this (though I expect something roughly principled can be found by looking at trading off inference compute and training compute), but maybe I think this improves the returns to around algo_improvement^1.1. If the coefficient on reducing cost was much worse, we would invest more in improving productivity per token, which bounds the returns somewhat.
Appendix: Isn’t compute tiny and decreasing per researcher?
One relevant objection is: Ok, but is this really feasible? Wouldn’t this imply that each AI researcher has only a tiny amount of compute? After all, if you use 20% of compute for inference of AI research labor, then each AI only gets 4x more compute to run experiments than for inference on itself? And, as you do algorithmic improvement to reduce AI cost and run more AIs, you also reduce the compute per AI!
First, it is worth noting that as we do algorithmic progress, both the cost of AI researcher inference and the cost of experiments on models of a given level of capability go down. Precisely, for any experiment that involves a fixed number of inference or gradient steps on a model which is some fixed effective compute multiplier below/above the performance of our AI laborers, cost is proportional to inference cost (so, as we improve our AI workforce, experiment cost drops proportionally). However, for experiments that involve training a model from scratch, I expect the reduction in experiment cost to be relatively smaller such that such experiments must become increasingly small relative to frontier scale. Overall, it might be important to mostly depend on approaches which allow for experiments that don’t require training runs from scratch or to adapt to increasingly smaller full experiment training runs. To the extent AIs are made smarter rather than more numerous, this isn’t a concern. Additionally, we only need so many orders of magnitude of growth. In principle, this consideration should be captured by the exponents in the compute vs. labor production function, but it is possible this production function has very different characteristics in the extremes. Overall, I do think this concern is somewhat important, but I don’t think it is a dealbreaker for a substantial number of OOMs of growth.
Appendix: Can’t algorithmic efficiency only get so high?
My sense is that this isn’t very close to being a blocker. Here is a quick bullet point argument (from some slides I made) that takeover-capable AI is possible on current hardware.
Human brain is perhaps ~1e14 FLOP/s
With that efficiency, each H100 can run 10 humans (current cost $2 / hour)
10s of millions of human-level AIs with just current hardware production
Human brain is probably very suboptimal:
AIs already much better at many subtasks
Possible to do much more training than within lifetime training with parallelism
Biological issues: locality, noise, focused on sensory processing, memory limits
Smarter AI could be more efficient (smarter humans use less FLOP per task)
AI could be 1e2-1e7 more efficient on tasks like coding, engineering
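A quick back-of-envelope for the bullets above (the H100 throughput and chip-production figures are rough assumptions of mine, not numbers from the slides):

```python
brain_flops = 1e14        # rough human-brain estimate from the first bullet
h100_flops = 1e15         # rough H100 throughput (assumption)
h100s = 3e6               # rough order of magnitude for current annual production (assumption)

print(h100_flops / brain_flops)           # ~10 human-equivalents per H100
print(h100s * h100_flops / brain_flops)   # ~3e7: tens of millions of human-level AIs
```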
This is just approximate because you can also trade off speed with cost in complicated ways and research new ways to more efficiently trade off speed and cost. I’ll be ignoring this for now.
It’s hard to determine because inference cost reductions have been driven by spending more compute on making smaller models e.g., training a smaller model for longer rather than just being driven by algorithmic improvement, and I don’t have great numbers on the difference off the top of my head.
In practice, we’ll be able to get slightly better returns by spending some of our resources investing in speed-specific improvements and in improving productivity rather than in reducing cost. I don’t currently have a principled way to estimate this (though I expect something roughly principled can be found by looking at trading off inference compute and training compute), but maybe I think this improves the returns to around algo_improvement^1.1.
When considering an “efficiency only singularity”, some different estimates get him r ≈ 1; r ≈ 1.5; r ≈ 1.6. (Where r is defined so that “for each x% increase in cumulative R&D inputs, the output metric will increase by r*x”. The condition for increasing returns is r > 1.)
I said I was 50-50 on an efficiency only singularity happening, at least temporarily. Based on these additional considerations I’m now at more like ~85% on a software only singularity. And I’d guess that initially r = ~3 (though I still think values as low as 0.5 or as high as 6 as plausible). There seem to be many strong ~independent reasons to think capability improvements would be a really huge deal compared to pure efficiency problems, and this is borne out by toy models of the dynamic.
Though note that later in the appendix he adjusts down from 85% to 65% due to some further considerations. Also, last I heard, Tom was more like 25% on software singularity. (ETA: Or maybe not? See other comments in this thread.)
Based on some guesses and some poll questions, my sense is that capabilities researchers would operate about 2.5x slower if they had 10x less compute (after adaptation)
Can you say roughly who the people surveyed were? (And if this was their raw guess or if you’ve modified it.)
I saw some polls from Daniel previously where I wasn’t sold that they were surveying people working on the most important capability improvements, so wondering if these are better.
Also, somewhat minor, but: I’m slightly concerned that surveys will overweight areas where labor is more useful relative to compute (because those areas should have disproportionately many humans working on them) and therefore be somewhat biased in the direction of labor being important.
I put roughly 50% probability on feasibility of software-only singularity.[1]
(I’m probably going to be reinventing a bunch of the compute-centric takeoff model in slightly different ways below, but I think it’s faster to partially reinvent than to dig up the material, and I probably do use a slightly different approach.)
My argument here will be a bit sloppy and might contain some errors. Sorry about this. I might be more careful in the future.
The key question for software-only singularity is: “When the rate of labor production is doubled (as in, as if your employees ran 2x faster[2]), does that more than double or less than double the rate of algorithmic progress? That is, algorithmic progress as measured by how fast we increase the labor production per FLOP/s (as in, the labor production from AI labor on a fixed compute base).”. This is a very economics-style way of analyzing the situation, and I think this is a pretty reasonable first guess. Here’s a diagram I’ve stolen from Tom’s presentation on explosive growth illustrating this:
Basically, every time you double the AI labor supply, does the time until the next doubling (driven by algorithmic progress) increase (fizzle) or decrease (foom)? I’m being a bit sloppy in saying “AI labor supply”. We care about a notion of parallelism-adjusted labor (faster laborers are better than more laborers) and quality increases can also matter. I’ll make the relevant notion more precise below.
I’m about to go into a relatively complicated argument for why I think the historical data supports software-only singularity. If you want more basic questions answered (such as “Doesn’t retraining make this too slow?”), consider looking at Tom’s presentation on takeoff speeds.
Here’s a diagram that you might find useful in understanding the inputs into AI progress:
And here is the relevant historical context in terms of trends:
Historically, algorithmic progress in LLMs looks like 3-4x per year including improvements on all parts of the stack.[3] This notion of algorithmic progress is “reduction in compute needed to reach a given level of frontier performance”, which isn’t equivalent to increases in the rate of labor production on a fixed compute base. I’ll talk more about this below.
This has been accompanied by increases of around 4x more hardware per year[4] and maybe 2x more quality-adjusted (parallel) labor working on LLM capabilities per year. I think total employees working on LLM capabilities have been roughly 3x-ing per year (in recent years), but quality has been decreasing over time.
A 2x increase in the quality-adjusted parallel labor force isn’t as good as the company getting the same sorts of labor tasks done 2x faster (as in, the resulting productivity from having your employees run 2x faster) due to parallelism tax (putting aside compute bottlenecks for now). I’ll apply the same R&D parallelization penalty as used in Tom’s takeoff model and adjust this down by a power of 0.7 to yield 20.7= 1.6x per year in increased labor production rate. (So, it’s as though the company keeps the same employees, but those employees operate 1.6x faster each year.)
It looks like the fraction of progress driven by algorithmic progress has been getting larger over time.
So, overall, we’re getting 3-4x algorithmic improvement per year being driven by 1.6x more labor per year and 4x more hardware.
So, the key question is how much of this algorithmic improvement is being driven by labor vs. by hardware. If it is basically all hardware, then the returns to labor must be relatively weak and software-only singularity seems unlikely. If it is basically all labor, then we’re seeing 3-4x algorithmic improvement per year for 1.6x more labor per year, which means the returns to labor look quite good (at least historically). Based on some guesses and some poll questions, my sense is that capabilities researchers would operate about 2.5x slower if they had 10x less compute (after adaptation), so the production function is probably proportional to compute0.4⋅labor0.6 (0.4=log10(2.5)). (This is assuming a cobb-douglas production function.) Edit: see the derivation of the relevant thing in Deep’s comment, my old thing was wrong[5].
Now, let’s talk more about the transfer from algorithmic improvement to the rate of labor production. A 2x algorithmic improvement in LLMs makes it so that you can reach the same (frontier) level of performance for 2x less training compute, but we actually care about a somewhat different notion for software-only singularity: how much you can increase the production rate of labor (the thing that we said was increasing at roughly a rate of 1.6x per year by using more human employees). My current guess is that every 2x algorithmic improvement in LLMs increases the rate of labor production by 21.1, and I’m reasonably confident that the exponent isn’t much below 1.0. I don’t currently have a very principled estimation strategy for this, and it’s somewhat complex to reason about. I discuss this in the appendix below.
So, if this exponent is around 1, our central estimate of 2.3 from above corresponds to software-only singularity and our estimate of 1.56 from above under more pessimistic assumptions corresponds to this not being feasible. Overall, my sense is that the best guess numbers lean toward software-only singularity.
More precisely, software-only singularity that goes for >500x effective compute gains above trend (to the extent this metric makes sense, this is roughly >5 years of algorithmic progress). Note that you can have software-only singularity be feasible while buying tons more hardware at the same time. And if this ends up expanding compute production by >10x using AI labor, then this would count as massive compute production despite also having a feasible software-only singularity. (However, in most worlds, I expect software-only singularity to be fast enough, if feasible, that we don’t see this.)
Rather than denominating labor in accelerating employees, we could instead denominate in number of parallel employees. This would work equivalently (we can always convert into equivalents to the extent these things can funge), but because we can actually accelerate employees and the serial vs. parallel distinction is important, I think it is useful to denominate in accelerating employees.
I would have previously cited 3x, but recent progress looks substantially faster (with DeepSeek v3 and reasoning models seemingly indicating somewhat faster than 3x progress IMO), so I’ve revised to 3-4x.
This includes both increased spending and improved chips. Here, I’m taking my best guess at increases in hardware usage for training and transferring this to research compute usage on the assumption that training compute and research compute have historically been proportional.
Edit: the reasoning I did here was off. Here was the old text: so the production function is probably roughly α⋅compute0.4⋅labor0.6 (0.4=log10(2.5)). Increasing compute by 4x and labor by 1.6x increases algorithmic improvement by 3-4x, let’s say 3.5x, so we have 3.5=α⋅40.4⋅1.60.6, α=3.540.4⋅1.60.6=1.52. Thus, doubling labor would increase algorithmic improvement by 1.52⋅20.6=2.3. This is very sensitive to the exact numbers; if we instead used 3x slower instead of 2.5x slower, we would have gotten that doubling labor increases algorithmic improvement by 1.56, which is substantially lower. Obviously, all the exact numbers here are highly uncertain.
Hey Ryan! Thanks for writing this up—I think this whole topic is important and interesting.
I was confused about how your analysis related to the Epoch paper, so I spent a while with Claude analyzing it. I did a re-analysis that finds similar results, but also finds (I think) some flaws in your rough estimate. (Keep in mind I’m not an expert myself, and I haven’t closely read the Epoch paper, so I might well be making conceptual errors. I think the math is right though!)
I’ll walk through my understanding of this stuff first, then compare to your post. I’ll be going a little slowly (A) to help myself refresh myself via referencing this later, (B) to make it easy to call out mistakes, and (C) to hopefully make this legible to others who want to follow along.
Using Ryan’s empirical estimates in the Epoch model
The Epoch model
The Epoch paper models growth with the following equation:
1. $\frac{d(\ln A)}{dt} \sim A^{-\beta} E^{\lambda}$,
where A = efficiency and E = research input. We want to consider worlds with a potential software takeoff, meaning that increases in AI efficiency directly feed into research input, which we model as $\frac{d(\ln A)}{dt} \sim A^{-\beta} A^{\lambda} = A^{\lambda - \beta}$. So the key consideration seems to be the ratio $\lambda / \beta$. If it’s 1, we get steady exponential growth from scaling inputs; greater, superexponential; smaller, subexponential.[1]
Fitting the model
How can we learn about this ratio from historical data?
Let’s pretend history has been convenient and we’ve seen steady exponential growth in both variables, so $A = A_0 e^{rt}$ and $E = E_0 e^{qt}$. Then $\frac{d(\ln A)}{dt}$ has been constant over time, so by equation 1, $A(t)^{-\beta} E(t)^{\lambda}$ has been constant as well. Substituting in for A and E, we find that $A_0^{-\beta} e^{-\beta r t} \cdot E_0^{\lambda} e^{\lambda q t}$ is constant over time, which is only possible if $\beta r = \lambda q$ and the exponent is always zero. Thus if we’ve seen steady exponential growth, the historical value of our key ratio is:
2. $\frac{\lambda}{\beta} = \frac{r}{q}$.
Intuitively, if we’ve seen steady exponential growth while research input has increased more slowly than research output (AI efficiency), there are superlinear returns to scaling inputs.
Introducing the Cobb-Douglas function
But wait! E, research input, is an abstraction that we can’t directly measure. Really there’s both compute and labor inputs. Those have indeed been growing roughly exponentially, but at different rates.
Intuitively, it makes sense to say that “effective research input” has grown as some kind of weighted average of the rate of compute and labor input growth. This is my take on why a Cobb-Douglas function of form (3) $E \sim C^{p} L^{1-p}$, with a weight parameter $0 < p < 1$, is useful here: it’s a weighted geometric average of the two inputs, so its growth rate is a weighted average of their growth rates.
Writing that out: in general, say both inputs have grown exponentially, so $C(t) = C_0 e^{q_c t}$ and $L(t) = L_0 e^{q_l t}$. Then E has grown as $E(t) = E_0 e^{qt} = E_0 e^{p q_c t + (1-p) q_l t}$, so q is the weighted average (4) $q = p q_c + (1-p) q_l$ of the growth rates of labor and capital.
Then, using Equation 2, we can estimate our key ratio $\frac{\lambda}{\beta}$ as $\frac{r}{q} = \frac{r}{p q_c + (1-p) q_l}$.
Let’s get empirical!
Plugging in your estimates:
Historical compute scaling of 4x/year gives $q_c = \ln(4)$;
Historical labor scaling of 1.6x/year gives $q_l = \ln(1.6)$;
Historical compute elasticity on research outputs of 0.4 gives $p = 0.4$;
Putting these together, $q = 0.4\ln(4) + 0.6\ln(1.6) \approx 0.84 \approx \ln(2.3)$.[2]
Historical efficiency improvement of 3.5x/year gives $r = \ln(3.5)$.
So $\frac{\lambda}{\beta} = \frac{\ln(3.5)}{\ln(2.3)} \approx 1.5$ [3]
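For concreteness, here’s a minimal numeric sketch of the calculation above (variable names are mine, not from the Epoch paper):

```python
import math

# Rates are per year; the estimates are the ones quoted above.
q_c = math.log(4)     # compute inputs growing ~4x/year
q_l = math.log(1.6)   # parallelism-adjusted labor growing ~1.6x/year
p = 0.4               # compute share in the Cobb-Douglas aggregate
r = math.log(3.5)     # algorithmic efficiency growing ~3.5x/year

q = p * q_c + (1 - p) * q_l                  # effective research input growth rate
print(round(q, 2), round(math.exp(q), 2))    # ~0.84, i.e. inputs growing ~2.3x/year
print(round(r / q, 2))                       # key ratio lambda/beta ~= 1.5
```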
Adjusting for labor-only scaling
But wait: we’re not done yet! Under our Cobb-Douglas assumption, scaling labor by a factor of 2 isn’t as good as scaling all research inputs by a factor of 2; it’s only $2^{0.6}/2$ as good.
Plugging Equation 3 (which describes research input E in terms of compute and labor) into Equation 1 (which estimates AI progress A based on research), our adjusted form of the Epoch model is $\frac{d(\ln A)}{dt} \sim A^{-\beta} E^{\lambda} \sim A^{-\beta} C^{p\lambda} L^{(1-p)\lambda}$.
Under a software-only singularity, we hold compute constant while scaling labor with AI efficiency, so $\frac{d(\ln A)}{dt} \sim A(t)^{-\beta} L(t)^{(1-p)\lambda}$ multiplied by a fixed compute term. Since labor scales as A, we have $\frac{d(\ln A)}{dt} \sim A^{-\beta} A^{\lambda(1-p)} = A^{\lambda(1-p) - \beta}$. By the same analysis as in our first section, we can see A grows exponentially if $\frac{\lambda(1-p)}{\beta} = 1$, and grows superexponentially if this ratio is >1. So our key ratio $\frac{\lambda}{\beta}$ just gets multiplied by $1-p$, and it wasn’t a waste to find it, phew!
Now we get the true form of our equation: we get a software-only foom iff $\frac{\lambda}{\beta}(1-p) > 1$, or (via equation 2) iff we see empirically that $\frac{r}{q}(1-p) > 1$. Call this the takeoff ratio: it corresponds to a) how much AI progress scales with inputs and b) how much of a penalty we take for not scaling compute.
Result: Above, we got $\frac{\lambda}{\beta} \approx 1.5$, so our takeoff ratio is $0.6 \times 1.5 = 0.9$. That’s quite close! If we think it’s more reasonable to use a historical growth rate of 4x instead of 3.5x, we’d increase our takeoff ratio by a factor of $\frac{\ln(4)}{\ln(3.5)} \approx 1.1$, to a ratio of about 0.99, right on the knife edge of FOOM. [4] [note: I previously had the wrong numbers here: I had lambda/beta = 1.6, which would mean the 4x/year case has a takeoff ratio of 1.05, putting it into FOOM land]
So this isn’t too far off from your results in terms of implications, but it is somewhat different (no FOOM for 3.5x, less sensitivity to the exact historical growth rate).
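A small sketch of the takeoff-ratio calculation, under the same assumptions (the helper function and its defaults are mine):

```python
import math

def takeoff_ratio(r, q_c=math.log(4), q_l=math.log(1.6), p=0.4):
    """(1 - p) * r / q: >1 suggests foom, <1 suggests fizzle (under this model)."""
    q = p * q_c + (1 - p) * q_l
    return (1 - p) * r / q

print(round(takeoff_ratio(math.log(3.5)), 2))  # ~0.90 with 3.5x/year efficiency growth
print(round(takeoff_ratio(math.log(4)), 2))    # ~0.99 with 4x/year efficiency growth
```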
Analyzing your approach:
Tweaking alpha:
Your estimate of $\alpha$ is in fact similar in form to my ratio $\frac{r}{q}$, but what you’re calculating instead is $\alpha = e^{r}/e^{q} = 3.5/(4^{0.4} \cdot 1.6^{0.6})$.
One indicator that something’s wrong is that your result involves checking whether $\alpha \cdot 2^{1-p} > 2$, or equivalently whether $\ln(\alpha) + (1-p)\ln(2) > \ln(2)$, or equivalently whether $\ln(\alpha) > p\ln(2)$. But the choice of 2 is arbitrary: conceptually, you just want to check if scaling software by a factor n increases outputs by a factor n or more. Yet $\ln(\alpha) - p\ln(n)$ clearly varies with n.
One way of parsing the problem is that alpha is (implicitly) time dependent—it is equal to exp(r * 1 year) / exp(q * 1 year), a ratio of progress vs inputs in the time period of a year. If you calculated alpha based on a different amount of time, you’d get a different value. By contrast, r/q is a ratio of rates, so it stays the same regardless of what timeframe you use to measure it.[5]
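A quick numeric illustration of that timescale dependence (a sketch using the same historical rates):

```python
import math

r = math.log(3.5)                             # output (efficiency) growth rate
q = 0.4 * math.log(4) + 0.6 * math.log(1.6)   # effective input growth rate

for years in (0.5, 1.0, 2.0):
    alpha = math.exp(r * years) / math.exp(q * years)  # progress vs. inputs over the window
    print(years, round(alpha, 2))                      # ~1.23, ~1.52, ~2.30: depends on the window
print(round(r / q, 2))                                 # ~1.5: invariant to the window
```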
Maybe I’m confused about what your Cobb-Douglas function is meant to be calculating—is it E within an Epoch-style takeoff model, or something else?
Nuances:
Does Cobb-Douglas make sense?
The geometric average of rates thing makes sense, but it feels a bit odd that such a simple intuitive approach leads to a functional form (Cobb-Douglas) that also has other implications.
Wikipedia says Cobb-Douglas functions can have the exponents not add to 1 (while both being between 0 and 1). Maybe this makes sense here? Not an expert.
How seriously should we take all this?
This whole thing relies on...
Assuming smooth historical trends
Assuming those trends continue in the future
And those trends themselves are based on functional fits to rough / unclear data.
It feels like this sort of thing is better than nothing, but I wish we had something better.
I really like the various nuances you’re adjusting for, like parallel vs serial scaling, and especially distinguishing algorithmic improvement from labor efficiency. [6] Thinking those things through makes this stuff feel less insubstantial and approximate...though the error bars still feel quite large.
Actually there’s a complexity here, which is that scaling labor alone may be less efficient than scaling “research inputs” which include both labor and compute. We’ll come to this in a few paragraphs.
This is only coincidentally similar to your figure of 2.3 :)
I originally had 1.6 here, but as Ryan points out in a reply it’s actually 1.5. I’ve tried to reconstruct what I could have put into a calculator to get 1.6 instead, and I’m at a loss!
I was curious how aggressive the superexponential growth curve would be with a takeoff ratio of a mere 0.96 × 1.1 = 1.056. A couple of Claude queries gave me different answers (maybe because the growth is so extreme that different solvers give meaningfully different approximations?), but they agreed that growth is fairly slow in the first year (~5x) and then hits infinity by the end of the second year. I wrote this comment with the wrong numbers (0.96 instead of 0.9), so it doesn’t accurately represent what you get if you plug in 4x capability growth per year. Still cool to get a sense of what these curves look like, though. I think this can be understood in terms of the alpha-being-implicitly-a-timescale-function thing: if you compare an alpha value with the ratio of growth you’re likely to see during the same time period, e.g. alpha(1 year) and n = one doubling, you probably get reasonable-looking results.
I find it annoying that people conflate “increased efficiency of doing known tasks” with “increased ability to do new useful tasks”. It seems to me that these could be importantly different, although it’s hard to even settle on a reasonable formalization of the latter. Some reasons this might be okay:
There’s a fuzzy conceptual boundary between the two: if GPT-n can do the task at a 0.01% success rate, does that count as a “known task”? What about if it can do each of 10 components at 0.01% success, so in practice we’ll never see it succeed if run without human guidance, but we know it’s technically possible?
Under a software singularity situation, maybe the working hypothesis is that the model can do everything necessary to improve itself a bunch, maybe just not very efficiently yet. So we only need efficiency growth, not to increase the task set. That seems like a stronger assumption than most make, but maybe a reasonable weaker assumption is that the model will ‘unlock’ the necessary new tasks over time, after which point they become subject to rapid efficiency growth.
And empirically, we have in fact seen rapid unlocking of new capabilities, so it’s not crazy to approximate “being able to do new things” as a minor but manageable slowdown to the process of AI replacing human AI R&D labor.
I think you are correct with respect to my estimate of α and the associated model I was using. Sorry about my error here. I think I was fundamentally confusing a few things in my head when writing out the comment.
I think your refactoring of my strategy is correct and I tried to check it myself, though I don’t feel confident in verifying it is correct.
Your estimate doesn’t account for the conversion between algorithmic improvement and labor efficiency, but it is easy to add this in by just changing the historical algorithmic efficiency improvement of 3.5x/year to instead be the adjusted effective labor efficiency rate and then solving identically. I was previously thinking the relationship was that labor efficiency was around the same as algorithmic efficiency, but I now think this is more likely to be around algo_efficiency^2 based on Tom’s comment.
Plugging this in, we’d get:
$\frac{\lambda}{\beta}(1-p) = \frac{r}{q}(1-p) = \frac{\ln(3.5^2)}{0.4\ln(4) + 0.6\ln(1.6)} \cdot (1-0.4) = \frac{2\ln(3.5)}{\ln(2.3)} \cdot (1-0.4) = 2 \cdot 1.5 \cdot 0.6 = 1.8$
(In your comment you said $\frac{\ln(3.5)}{\ln(2.3)} = 1.6$, but I think the arithmetic is a bit off here and the answer is closer to 1.5.)
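A quick check of this adjusted calculation (a sketch; the squared relationship is the assumption from Tom’s comment):

```python
import math

p = 0.4
q = p * math.log(4) + (1 - p) * math.log(1.6)   # ~ln(2.3), as before
r_labor = math.log(3.5 ** 2)                    # labor-efficiency rate ~ algo_efficiency^2

print(round(r_labor / q, 2))            # ~3.0
print(round((1 - p) * r_labor / q, 2))  # ~1.8, the adjusted takeoff ratio
```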
Neat, thanks a ton for the algorithmic-vs-labor update—I appreciated that you’d distinguished those in your post, but I forgot to carry that through in mine! :)
And oops, I really don’t know how I got to 1.6 instead of 1.5 there. Thanks for the flag, have updated my comment accordingly!
The square relationship idea is interesting—that factor of 2 is a huge deal. Would be neat to see a Guesstimate or Squiggle version of this calculation that tries to account for the various nuances Tom mentions, and has error bars on each of the terms, so we both get a distribution of r and a sensitivity analysis. (Maybe @Tom Davidson already has this somewhere? If not I might try to make a crappy version myself, or poke talented folks I know to do a good version :)
The existing Epoch paper is pretty good, but it doesn’t directly target LLMs, which seems somewhat sad.
The thing I’d be most excited about is:
Epoch does an in depth investigation using an estimation methodology which is directly targeting LLMs (rather than looking at returns in some other domains).
They use public data and solicit data from companies about algorithmic improvement, head count, compute on experiments etc.
(Some) companies provide this data. Epoch potentially doesn’t publish this exact data and instead just publishes the results of the final analysis to reduce capabilities externalities. (IMO, companies are somewhat unlikely to do this, but I’d like to be proven wrong!)
(I’m going through this and understanding where I made an error with my approach to α. I think I did make an error, but I’m trying to make sure I’m not still confused. Edit: I’ve figured this out, see my other comment.)
It shouldn’t matter in this case because we’re raising the whole value of E to λ.
Here’s my own estimate for this parameter:
Once AI has automated AI R&D, will software progress become faster or slower over time? This depends on the extent to which software improvements get harder to find as software improves – the steepness of the diminishing returns.
We can ask the following crucial empirical question:
When (cumulative) cognitive research inputs double, how many times does software double?
(In growth models of a software intelligence explosion, the answer to this empirical question is a parameter called r.)
If the answer is “< 1”, then software progress will slow down over time. If the answer is “1”, software progress will remain at the same exponential rate. If the answer is “>1”, software progress will speed up over time.
The bolded question can be studied empirically, by looking at how many times software has doubled each time the human researcher population has doubled.
(What does it mean for “software” to double? A simple way of thinking about this is that software doubles when you can run twice as many copies of your AI with the same compute. But software improvements don’t just improve runtime efficiency: they also improve capabilities. To incorporate these improvements, we’ll ultimately need to make some speculative assumptions about how to translate capability improvements into an equivalently-useful runtime efficiency improvement.)
The best quality data on this question is Epoch’s analysis of computer vision training efficiency. They estimate r = ~1.4: every time the researcher population doubled, training efficiency doubled 1.4 times. (Epoch’s preliminary analysis indicates that the r value for LLMs would likely be somewhat higher.) We can use this as a starting point, and then make various adjustments:
Upwards for improving capabilities. Improving training efficiency improves capabilities, as you can train a model with more “effective compute”. To quantify this effect, imagine we use a 2X training efficiency gain to train a model with twice as much “effective compute”. How many times would that double “software”? (I.e., how many doublings of runtime efficiency would have the same effect?) There are various sources of evidence on how much capabilities improve every time training efficiency doubles: toy ML experiments suggest the answer is ~1.7; human productivity studies suggest the answer is ~2.5. I put more weight on the former, so I’ll estimate 2. This doubles my median estimate to r = ~2.8 (= 1.4 * 2).
Upwards for post-training enhancements. So far, we’ve only considered pre-training improvements. But post-training enhancements like fine-tuning, scaffolding, and prompting also improve capabilities (o1 was developed using such techniques!). It’s hard to say how large an increase we’ll get from post-training enhancements. These can allow faster thinking, which could be a big factor. But there might also be strong diminishing returns to post-training enhancements holding base models fixed. I’ll estimate a 1-2X increase, and adjust my median estimate to r = ~4 (2.8*1.45=4).
Downwards for less growth in compute for experiments. Today, rising compute means we can run increasing numbers of GPT-3-sized experiments each year. This helps drive software progress. But compute won’t be growing in our scenario. That might mean that returns to additional cognitive labour diminish more steeply. On the other hand, the most important experiments are ones that use similar amounts of compute to training a SOTA model. Rising compute hasn’t actually increased the number of these experiments we can run, as rising compute increases the training compute for SOTA models. And in any case, this doesn’t affect post-training enhancements. But this still reduces my median estimate down to r = ~3. (See Eth (forthcoming) for more discussion.)
Downwards for fixed scale of hardware. In recent years, the scale of hardware available to researchers has increased massively. Researchers could invent new algorithms that only work at the new hardware scales for which no one had previously tried to develop algorithms. Researchers may have been plucking low-hanging fruit for each new scale of hardware. But in the software intelligence explosions I’m considering, this won’t be possible because the hardware scale will be fixed. OAI estimates ImageNet efficiency via a method that accounts for this (by focussing on a fixed capability level), and finds a 16-month doubling time, as compared with Epoch’s 9-month doubling time. This reduces my estimate down to r = ~1.7 (3 * 9⁄16).
Downwards for diminishing returns becoming steeper over time. In most fields, returns diminish more steeply than in software R&D. So perhaps software will tend to become more like the average field over time. To estimate the size of this effect, we can take our estimate that software is ~10 OOMs from physical limits (discussed below), and assume that for each OOM increase in software, r falls by a constant amount, reaching zero once physical limits are reached. If r = 1.7, then this implies that r reduces by 0.17 for each OOM. Epoch estimates that pre-training algorithmic improvements are growing by an OOM every ~2 years, which would imply a reduction in r of 1.02 (6*0.17) by 2030. But when we include post-training enhancements, the decrease will be smaller (as [reason]), perhaps ~0.5. This reduces my median estimate to r = ~1.2 (1.7-0.5).
Overall, my median estimate of r is 1.2. I use a log-uniform distribution with the bounds 3X higher and lower (0.4 to 3.6).
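Putting the adjustment chain above into a small sketch (the multipliers are my reading of the rough factors described above, not precise values):

```python
r = 1.4        # starting point: Epoch's computer vision estimate
r *= 2         # upwards for improving capabilities                    -> ~2.8
r *= 1.45      # upwards for post-training enhancements                -> ~4
r *= 3 / 4     # downwards for less growth in experiment compute (4 -> ~3)
r *= 9 / 16    # downwards for fixed hardware scale (9- vs 16-month doubling) -> ~1.7
r -= 0.5       # downwards for steepening diminishing returns          -> ~1.2
print(round(r, 2))   # ~1.2, the median estimate
```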
My sense is that I start with a higher r value due to the LLM case looking faster (and not feeling the need to adjust downward in a few places like you do in the LLM case). Obviously the numbers in the LLM case are much less certain given that I’m guessing based on qualitative improvement and looking at some open source models, but being closer to what we actually care about maybe overwhelms this.
I also think I’d make a somewhat smaller adjustment for the diminishing returns consideration, due to thinking there’s a good chance of substantially sharper diminishing returns as you get closer and closer to the limit, rather than linearly decreasing r (based on some first-principles reasoning and my understanding of how returns diminished in the semiconductor case).
But the biggest delta is that I think I wasn’t pricing in the importance of increasing capabilities. (Which seems especially important if you apply a large R&D parallelization penalty.)
Sorry, I don’t follow: why are they less certain?
I’d be interested to hear more about this. The semiconductor case is hard, as we don’t know how far we are from limits, but if we use Landauer’s limit then I’d guess you’re right. There’s also uncertainty about how much algorithmic progress we will make and have already made.
I’m just eyeballing the rate of algorithmic progress, while in the computer vision case we can at least look at benchmarks and know the training compute costs for various models.
My sense is that you have generalization issues in the computer vision case, while in the frontier LLM case you have issues with knowing the actual numbers (in terms of number of employees and cost of training runs). I’m also just not carefully doing the accounting.
I don’t have much to say here sadly, but I do think investigating this could be useful.
Really appreciate you covering all these nuances, thanks Tom!
Can you give a pointer to the studies you mentioned here?
Sure! See here: https://docs.google.com/document/d/1DZy1qgSal2xwDRR0wOPBroYE_RDV1_2vvhwVz4dxCVc/edit?tab=t.0#bookmark=id.eqgufka8idwl
Here’s a simple argument I’d be keen to get your thoughts on:
On the Possibility of a Tastularity
Research taste is the collection of skills including experiment ideation, literature review, experiment analysis, etc. that collectively determine how much you learn per experiment on average (perhaps alongside another factor accounting for inherent problem difficulty / domain difficulty, of course, and diminishing returns)
Human researchers seem to vary quite a bit in research taste—specifically, the difference between 90th percentile professional human researchers and the very best seems like maybe an order of magnitude? Depends on the field, etc. And the tails are heavy; there is no sign of the distribution bumping up against any limits.
Yet the causes of these differences are minor! Take the very best human researchers compared to the 90th percentile. They’ll have almost the same brain size, almost the same amount of experience, almost the same genes, etc. in the grand scale of things.
This means we should assume that if the human population were massively bigger, e.g. trillions of times bigger, there would be humans whose brains don’t look that different from the brains of the best researchers on Earth, and yet who are an OOM or more above the best Earthly scientists in research taste. -- AND it suggests that in the space of possible mind-designs, there should be minds which are e.g. within 3 OOMs of those brains in every dimension of interest, and which are significantly better still in the dimension of research taste. (How much better? Really hard to say. But it would be surprising if it was only, say, 1 OOM better, because that would imply that human brains are running up against the inherent limits of research taste within a 3-OOM mind design space, despite human evolution having only explored a tiny subspace of that space, and despite the human distribution showing no signs of bumping up against any inherent limits)
OK, so what? So, it seems like there’s plenty of room to improve research taste beyond human level. And research taste translates pretty directly into overall R&D speed, because it’s about how much experimentation you need to do to achieve a given amount of progress. With enough research taste, you don’t need to do experiments at all—or rather, you look at the experiments that have already been done, and you infer from them all you need to know to build the next design or whatever.
Anyhow, tying this back to your framework: What if the diminishing returns / increasing problem difficulty / etc. dynamics are such that, if you start from a top-human-expert-level automated researcher, and then do additional AI research to double its research taste, and then do additional AI research to double its research taste again, etc. the second doubling happens in less time than it took to get to the first doubling? Then you get a singularity in research taste (until these conditions change of course) -- the Tastularity.
How likely is the Tastularity? Well, again one piece of evidence here is the absurdly tiny differences between humans that translate to huge differences in research taste, and the heavy-tailed distribution. This suggests that we are far from any inherent limits on research taste even for brains roughly the shape and size and architecture of humans, and presumably the limits for a more relaxed (e.g. 3 OOM radius in dimensions like size, experience, architecture) space in mind-design are even farther away. It similarly suggests that there should be lots of hill-climbing that can be done to iteratively improve research taste.
How does this relate to software-singularity? Well, research taste is just one component of algorithmic progress; there is also speed, # of parallel copies & how well they coordinate, and maybe various other skills besides such as coding ability. So even if the Tastularity isn’t possible, improvements in taste will stack with improvements in those other areas, and the sum might cross the critical threshold.
In my framework, this is basically an argument that algorithmic-improvement-juice can be translated into a large improvement in AI R&D labor production via the mechanism of greatly increasing the productivity per “token” (or unit of thinking compute or whatever). See my breakdown here where I try to convert from historical algorithmic improvement to making AIs better at producing AI R&D research.
Your argument is basically that this taste mechanism might have higher returns than reducing cost to run more copies.
I agree this sort of argument means that returns to algorithmic improvement on AI R&D labor production might be bigger than you would otherwise think. This is both because this mechanism might be more promising than other mechanisms and because, even if it is somewhat less promising, diverse approaches make returns diminish less aggressively. (In my model, this means that the best-guess conversion might be more like algo_improvement^1.3 rather than algo_improvement^1.0.)
I think it might be somewhat tricky to train AIs to have very good research taste, but this doesn’t seem that hard via training them on various prediction objectives.
At a more basic level, I expect that training AIs to predict the results of experiments and then running experiments based on value of information as estimated partially based on these predictions (and skipping experiments with certain results and more generally using these predictions to figure out what to do) seems pretty promising. It’s really hard to train humans to predict the results of tens of thousands of experiments (both small and large), but this is relatively clean outcomes based feedback for AIs.
I don’t really have a strong inside view on how much the “AI R&D research taste” mechanism increases the returns to algorithmic progress.
I’ll paste my own estimate for this param in a different reply.
But here are the places I most differ from you:
Bigger adjustment for ‘smarter AI’. You’ve argued in your appendix that, only including ‘more efficient’ and ‘faster’ AI, you think the software-only singularity goes through. I think including ‘smarter’ AI makes a big difference. This evidence suggests that doubling training FLOP doubles output-per-FLOP 1-2 times. In addition, algorithmic improvements will improve runtime efficiency. So overall I think a doubling of algorithms yields ~two doublings of (parallel) cognitive labour.
--> software singularity more likely
Lower lambda. I’d now use more like lambda = 0.4 as my median. There’s really not much evidence pinning this down; I think Tamay Besiroglu thinks there’s some evidence for values as low as 0.2. This will decrease the observed historical increase in human workers more than it decreases the gains from algorithmic progress (because of speed improvements).
--> software singularity slightly more likely
Complications thinking about compute which might be a wash.
Number of useful experiments has increased by less than 4X/year. You say compute inputs have been increasing at 4X. But simultaneously the scale of experiments people must run to be near the frontier has increased by a similar amount. So the number of near-frontier experiments has not increased at all.
This argument would be right if the ‘usefulness’ of an experiment depends solely on how much compute it uses compared to training a frontier model. I.e. experiment_usefulness = log(experiment_compute / frontier_model_training_compute). The 4X/year increases the numerator and denominator of the expression, so there’s no change in usefulness-weighted experiments.
That might be false. GPT-2-sized experiments might in some ways be equally useful even as frontier model size increases. Maybe a better expression would be experiment_usefulness = alpha * log(experiment_compute / frontier_model_training_compute) + beta * log(experiment_compute). In this case, the number of usefulness-weighted experiments has increased due to the second term (see the sketch below).
--> software singularity slightly more likely
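Here’s a minimal sketch contrasting the two functional forms (the specific alpha, beta, and compute values are purely illustrative):

```python
import math

def usefulness_v1(experiment_compute, frontier_compute):
    # usefulness depends only on experiment scale relative to the frontier
    return math.log(experiment_compute / frontier_compute)

def usefulness_v2(experiment_compute, frontier_compute, alpha=1.0, beta=0.5):
    # absolute experiment scale also matters (alpha and beta are illustrative)
    return (alpha * math.log(experiment_compute / frontier_compute)
            + beta * math.log(experiment_compute))

# Scale experiment and frontier compute together by 4x (one year of hardware growth),
# in arbitrary compute units:
print(usefulness_v1(4.0, 400.0), usefulness_v1(16.0, 1600.0))  # unchanged
print(usefulness_v2(4.0, 400.0), usefulness_v2(16.0, 1600.0))  # increases via the beta term
```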
Steeper diminishing returns during software singularity. Recent algorithmic progress has grabbed low-hanging fruit from new hardware scales. During a software-only singularity that won’t be possible. You’ll have to keep finding new improvements on the same hardware scale. Returns might diminish more quickly as a result.
--> software singularity slightly less likely
Compute share might increase as it becomes scarce. You estimate a share of 0.4 for compute, which seems reasonable. But it might fall over time as compute becomes a bottleneck. As an intuition pump, if your workers could think 1e10 times faster, you’d be fully constrained on the margin by the need for more compute: more labour wouldn’t help at all but more compute could be fully utilised so the compute share would be ~1.
--> software singularity slightly less likely
--> overall these compute adjustments prob make me more pessimistic about the software singularity, compared to your assumptions
Taking it all together, I think you should put more probability on the software-only singularity, mostly because of capability improvements being much more significant than you assume.
Yep, I think my estimates were too low based on these considerations and I’ve updated up accordingly. I updated down on your argument that maybe r decreases linearly as you approach optimal efficiency. (I think it probably doesn’t decrease linearly and instead drops faster towards the end based partially on thinking a bit about the dynamics and drawing on the example of what we’ve seen in semi-conductor improvement over time, but I’m not that confident.) Maybe I’m now at like 60% software-only is feasible given these arguments.
Isn’t this really implausible? This implies that if you had 1000 researchers/engineers of average skill at OpenAI doing AI R&D, this would be as good as having one average-skill researcher running at 16x ($1000^{0.4}$) speed. It does seem very slightly plausible that having someone as good as the best researcher/engineer at OpenAI run at 16x speed would be competitive with OpenAI, but that isn’t what this term is computing. 0.2 is even more crazy, implying that 1000 researchers/engineers is as good as one researcher/engineer running at 4x speed!
I think 0.4 is far on the lower end (maybe 15th percentile) for all the way down to one accelerated researcher, but seems pretty plausible at the margin.
As in, 0.4 suggests that 1000 researchers = 100 researchers at 2.5x speed which seems kinda reasonable while 1000 researchers = 1 researcher at 16x speed does seem kinda crazy / implausible.
So, I think my current median lambda at likely margins is like 0.55 or something and 0.4 is also pretty plausible at the margin.
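To make the equivalences being debated concrete, here’s a tiny sketch of the parallelization-penalty arithmetic (treating aggregate labor as N^lambda times serial speed, the simple form under discussion here, and ignoring compute bottlenecks):

```python
def serial_equivalent(n_workers, speed=1.0, lam=0.4):
    """Aggregate labor under an N^lambda parallelism penalty, in 'one researcher at Nx speed' units."""
    return n_workers ** lam * speed

print(round(serial_equivalent(1000, lam=0.4), 1))            # ~15.9: 1000 workers ~ one at ~16x speed
print(round(serial_equivalent(100, speed=2.5, lam=0.4), 1))  # ~15.8: or ~100 workers at 2.5x speed
print(round(serial_equivalent(1000, lam=0.2), 1))            # ~4.0: with lambda = 0.2
```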
Ok, I think what is going on here is maybe that the constant you’re discussing here is different from the constant I was discussing. I was trying to discuss the question of how much worse serial labor is than parallel labor, but I think the lambda you’re talking about takes into account compute bottlenecks and similar?
Not totally sure.
I’m confused — I thought you put significantly less probability on software-only singularity than Ryan does? (Like half?) Maybe you were using a different bound for the number of OOMs of improvement?
Sorry, for my comments on this post I’ve been referring to “software-only singularity?” only as “will the parameter r > 1 when we first fully automate AI R&D”, not as a threshold for some number of OOMs. That’s what Ryan’s analysis seemed to be referring to.
I separately think that even if initially r>1 the software explosion might not go on for that long
I’ll post about my views on different numbers of OOMs soon
I think Tom’s take is that he expects I will put more probability on software only singularity after updating on these considerations. It seems hard to isolate where Tom and I disagree based on this comment, but maybe it is on how much to weigh various considerations about compute being a key input.
Appendix: Estimating the relationship between algorithmic improvement and labor production
In particular, if we fix the architecture to use a token abstraction and consider training a new improved model: we care about how much cheaper you make generating tokens at a given level of performance (in inference tok/FLOP), how much serially faster you make generating tokens at a given level of performance (in serial speed: tok/s at a fixed level of tok/FLOP), and how much more performance you can get out of tokens (labor/tok, really per serial token). Then, for a given new model with reduced cost, increased speed, and increased production per token and assuming a parallelism penalty of 0.7, we can compute the increase in production as roughly: cost_reduction^0.7 ⋅ speed_increase^(1−0.7) ⋅ productivity_multiplier[1] (I can show the math for this if there is interest).
My sense is that reducing inference compute needed for a fixed level of capability that you already have (using a fixed training run) is usually somewhat easier than making frontier compute go further by some factor, though I don’t think it is easy to straightforwardly determine how much easier this is[2]. Let’s say there is a 1.25 exponent on reducing cost (as in, a 2x algorithmic efficiency improvement is as hard as a 2^1.25 = 2.38x reduction in cost)? (I’m also generally pretty confused about what the exponent should be. I think exponents from 0.5 to 2 seem plausible, though I’m pretty confused. 0.5 would correspond to the square root from just scaling data in scaling laws.) It seems substantially harder to increase speed than to reduce cost, as speed is substantially constrained by serial depth, at least when naively applying transformers. Naively, reducing cost by a factor β (which implies reducing parameters by β) will increase speed by somewhat more than β^(1/3), as parameter count is roughly cubic in depth. I expect you can do somewhat better than this because reduced matrix sizes also increase speed (it isn’t just depth) and because you can introduce speed-specific improvements (that just improve speed and not cost). But this factor might be pretty small, so let’s stick with 1/3 for now and ignore speed-specific improvements. Now, let’s consider the case where we don’t have productivity multipliers (which is strictly more conservative). Then, we get that the increase in labor production is:
cost_reduction^0.7 ⋅ cost_reduction^((1/3)⋅(1−0.7)) = cost_reduction^0.8 = algo_improvement^(1.25⋅0.8) = algo_improvement^1
So, these numbers ended up yielding an exact equivalence between frontier algorithmic improvement and effective labor production increases. (This is a coincidence, though I do think the exponent is close to 1.)
In practice, we’ll be able to get slightly better returns by spending some of our resources investing in speed-specific improvements and in improving productivity rather than in reducing cost. I don’t currently have a principled way to estimate this (though I expect something roughly principled can be found by looking at trading off inference compute and training compute), but maybe I think this improves the returns to around algo_improvement^1.1. If the coefficient on reducing cost was much worse, we would invest more in improving productivity per token, which bounds the returns somewhat.
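Here’s a small sketch of this calculation (parameter names and defaults are mine, following the estimates above):

```python
def labor_production_multiplier(algo_improvement,
                                cost_exponent=1.25,    # cost reduction somewhat easier than frontier gains
                                speed_exponent=1 / 3,  # speed gain from reduced parameters/depth
                                parallel_penalty=0.7,
                                productivity_multiplier=1.0):
    cost_reduction = algo_improvement ** cost_exponent
    speed_increase = cost_reduction ** speed_exponent
    return (cost_reduction ** parallel_penalty
            * speed_increase ** (1 - parallel_penalty)
            * productivity_multiplier)

print(round(labor_production_multiplier(2.0), 2))  # ~2.0: a 2x algo improvement ~ 2x labor production
```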
Appendix: Isn’t compute tiny and decreasing per researcher?
One relevant objection is: Ok, but is this really feasible? Wouldn’t this imply that each AI researcher has only a tiny amount of compute? After all, if you use 20% of compute for inference of AI research labor, then each AI only gets 4x more compute to run experiments than for inference on itself? And, as you do algorithmic improvement to reduce AI cost and run more AIs, you also reduce the compute per AI! First, it is worth noting that as we do algorithmic progress, both the cost of AI researcher inference and the cost of experiments on models of a given level of capability go down. Precisely, for any experiment that involves a fixed number of inference or gradient steps on a model which is some fixed effective compute multiplier below/above the performance of our AI laborers, cost is proportional to inference cost (so, as we improve our AI workforce, experiment cost drops proportionally). However, for experiments that involve training a model from scratch, I expect the reduction in experiment cost to be relatively smaller such that such experiments must become increasingly small relative to frontier scale. Overall, it might be important to mostly depend on approaches which allow for experiments that don’t require training runs from scratch or to adapt to increasingly smaller full experiment training runs. To the extent AIs are made smarter rather than more numerous, this isn’t a concern. Additionally, we only need so many orders of magnitude of growth. In principle, this consideration should be captured by the exponents in the compute vs. labor production function, but it is possible this production function has very different characteristics in the extremes. Overall, I do think this concern is somewhat important, but I don’t think it is a dealbreaker for a substantial number of OOMs of growth.
Appendix: Can’t algorithmic efficiency only get so high?
My sense is that this isn’t very close to being a blocker. Here is a quick bullet point argument (from some slides I made) that takeover-capable AI is possible on current hardware.
Human brain is perhaps ~1e14 FLOP/s
With that efficiency, each H100 can run 10 humans (current cost $2 / hour)
10s of millions of human-level AIs with just current hardware production
Human brain is probably very suboptimal:
AIs already much better at many subtasks
Possible to do much more training than within-lifetime training, using parallelism
Biological issues: locality, noise, focused on sensory processing, memory limits
Smarter AI could be more efficient (smarter humans use less FLOP per task)
AI could be 1e2-1e7 more efficient on tasks like coding, engineering
Probably smaller improvement on video processing
Say 1e4, so 100,000 human-equivalents per H100
Qualitative intelligence could be a big deal
Seems like peak efficiency isn’t a blocker.
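A quick sketch of the arithmetic in these bullets (the ~1e15 FLOP/s H100 throughput is the value implied by “10 humans per H100”; the 1e4 efficiency gain is the illustrative guess above):

```python
human_brain_flops = 1e14   # rough brain estimate from the slides
h100_flops = 1e15          # throughput implied by "10 humans per H100"
efficiency_gain = 1e4      # assumed improvement over the brain on R&D-relevant tasks

humans_per_h100 = h100_flops / human_brain_flops
print(humans_per_h100)                      # 10.0
print(humans_per_h100 * efficiency_gain)    # 100000.0 human-equivalents per H100
```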
This is just approximate because you can also trade off speed with cost in complicated ways and research new ways to more efficiently trade off speed and cost. I’ll be ignoring this for now.
It’s hard to determine because inference cost reductions have been driven by spending more compute on making smaller models e.g., training a smaller model for longer rather than just being driven by algorithmic improvement, and I don’t have great numbers on the difference off the top of my head.
Interesting comparison point: Tom thought this would give a way larger boost in his old software-only singularity appendix.
When considering an “efficiency only singularity”, some different estimates get him r ≈ 1, r ≈ 1.5, and r ≈ 1.6. (Where r is defined so that “for each x% increase in cumulative R&D inputs, the output metric will increase by r*x”. The condition for increasing returns is r > 1.)
Whereas when including capability improvements:
Though note that later in the appendix he adjusts down from 85% to 65% due to some further considerations. Also, last I heard, Tom was more like 25% on software singularity. (ETA: Or maybe not? See other comments in this thread.)
Interesting. My numbers aren’t very principled and I could imagine thinking capability improvements are a big deal for the bottom line.
Can you say roughly who the people surveyed were? (And if this was their raw guess or if you’ve modified it.)
I saw some polls from Daniel previously where I wasn’t sold that they were surveying people working on the most important capability improvements, so wondering if these are better.
Also, somewhat minor, but: I’m slightly concerned that surveys will overweight areas where labor is more useful relative to compute (because those areas should have disproportionately many humans working on them) and therefore be somewhat biased in the direction of labor being important.
I’m citing the polls from Daniel + what I’ve heard from random people + my guesses.
Ryan discusses this at more length in his 80K podcast.