Sometimes people think of “software-only singularity” as an important category of ways AI could go. A software-only singularity can roughly be defined as when you get increasing-returns (hyper-exponential) growth just via the mechanism of AIs increasing the labor input to AI capabilities software[1] R&D (i.e., keeping the compute input to AI capabilities fixed).
While the software-only singularity dynamic is an important part of my model, I often find it useful to more directly consider the outcome that software-only singularity might cause: the feasibility of takeover-capable AI without massive compute automation. That is, will the leading AI developer(s) be able to competitively develop AIs powerful enough to plausibly take over[2] without previously needing to use AI systems to massively (>10x) increase compute production[3]?
[This is by Ryan Greenblatt and Alex Mallen]
We care about whether the developers’ AI greatly increases compute production because this would require heavy integration into the global economy in a way that relatively clearly indicates to the world that AI is transformative. Greatly increasing compute production would require building additional fabs which currently involve substantial lead times, likely slowing down the transition from clearly transformative AI to takeover-capable AI.[4][5] In addition to economic integration, this would make the developer dependent on a variety of actors after the transformative nature of AI is made more clear, which would more broadly distribute power.
For example, if OpenAI is selling their AI’s labor to ASML and massively accelerating chip production before anyone has made takeover-capable AI, then (1) it would be very clear to the world that AI is transformatively useful and accelerating, (2) building fabs would be a constraint in scaling up AI which would slow progress, and (3) ASML and the Netherlands could have a seat at the table in deciding how AI goes (along with any other actors critical to OpenAI’s competitiveness). Given that AI is much more legibly transformatively powerful in this world, they might even want to push for measures to reduce AI/human takeover risk.
A software-only singularity is not necessary for developers to have takeover-capable AIs without having previously used them for massive compute automation (it is also not clearly sufficient, since it might be too slow or uncompetitive by default without massive compute automation as well). Instead, developers might be able to achieve this outcome by other forms of fast AI progress:
Algorithmic progress / scaling is fast enough at the relevant point independent of AI automation. This would likely be due to one of:
Downstream AI capabilities progress very rapidly with the default software and/or hardware progress rate at the relevant point;
Existing compute production (including repurposable production) suffices (this is sometimes called hardware overhang) and the developer buys a bunch more chips (after generating sufficient revenue or demoing AI capabilities to attract investment);
Or there is a large algorithmic advance that unlocks a new regime with fast progress due to low-hanging fruit.[6]
AI automation results in a one-time acceleration of software progress without causing an explosive feedback loop, but this does suffice for pushing AIs above the relevant capability threshold quickly.
Other developers just aren’t very competitive (due to secrecy, regulation, or other governance regimes) such that proceeding at a relatively slower rate (via algorithmic and hardware progress) suffices.
My inside view sense is that the feasibility of takeover-capable AI without massive compute automation is about 75% likely if we get AIs that dominate top-human-experts prior to 2040.[7] Further, I think that in practice, takeover-capable AI without massive compute automation is maybe about 60% likely. (This is because massively increasing compute production is difficult and slow, so if proceeding without massive compute automation is feasible, this would likely occur.) However, I’m reasonably likely to change these numbers on reflection due to updating about what level of capabilities would suffice for being capable of takeover (in the sense defined in an earlier footnote) and about the level of revenue and investment needed to 10x compute production. I’m also uncertain whether a substantially smaller scale-up than 10x (e.g., 3x) would suffice to cause the effects noted earlier.
To date, software progress has looked like “improvements in pre-training algorithms, data quality, prompting strategies, tooling, scaffolding” as described here.
This takeover could occur autonomously, via assisting the developers in a power grab, or via partnering with a US adversary. I’ll count it as “takeover” if the resulting coalition has de facto control of most resources. I’ll count an AI as takeover-capable if it would have a >25% chance of succeeding at a takeover (with some reasonable coalition) if no other actors had access to powerful AI systems. Further, this takeover wouldn’t be preventable with plausible interventions on legible human controlled institutions, so e.g., it doesn’t include the case where an AI lab is steadily building more powerful AIs for an eventual takeover much later (see discussion here). This 25% probability is as assessed under my views but with the information available to the US government at the time this AI is created. This line is intended to point at when states should be very worried about AI systems undermining their sovereignty unless action has already been taken. Note that insufficient inference compute could prevent an AI from being takeover-capable even if it could take over with enough parallel copies. And note that whether a given level of AI capabilities suffices for being takeover-capable is dependent on uncertain facts about how vulnerable the world seems (from the subjective vantage point I defined earlier). Takeover via the mechanism of an AI escaping, independently building more powerful AI that it controls, and then this more powerful AI taking over would count as that original AI that escaped taking over. I would also count a rogue internal deployment that leads to the AI successfully backdooring or controlling future AI training runs such that those future AIs take over. However, I would not count merely sabotaging safety research.
I mean 10x additional production (caused by AI labor) above long running trends in expanding compute production and making it more efficient. As in, spending on compute production has been increasing each year and the efficiency of compute production (in terms of FLOP/$ or whatever) has also been increasing over time, and I’m talking about going 10x above this trend due to using AI labor to expand compute production (either revenue from AI labor or having AIs directly work on chips as I’ll discuss in a later footnote).
Note that I don’t count converting fabs from making other chips (e.g., phones) to making AI chips as scaling up compute production; I’m just considering things that scale up the amount of AI chips we could somewhat readily produce. TSMC’s revenue is “only” about $100 billion per year, so if only converting fabs is needed, this could be done without automation of compute production and justified on the basis of AI revenues that are substantially smaller than the revenues that would justify building many more fabs. Currently AI is around 15% of leading node production at TSMC, so only a few more doublings are needed for it to consume most capacity.
Note that the AI could indirectly increase compute production via being sufficiently economically useful that it generates enough money to pay for greatly scaling up compute. I would count this as massive compute automation, though some routes through which the AI could be sufficiently economically useful might be less convincing of transformativeness than the AIs substantially automating the process of scaling up compute production. However, I would not count the case where AI systems are impressive enough to investors that this justifies investment that suffices for greatly scaling up fab capacity while profits/revenues wouldn’t suffice for greatly scaling up compute on their own. In reality, if compute is greatly scaled up, this will occur via a mixture of speculative investment, the AI earning revenue, and the AI directly working on automating labor along the compute supply chain. If the revenue and direct automation would suffice for an at least massive compute scale-up (>10x) on their own (removing the component from speculative investment), then I would count this as massive compute automation.
A large algorithmic advance isn’t totally unprecedented. It could suffice if we see an advance similar to what seemingly happened with reasoning models like o1 and o3 in 2024.
I’m not sure if the definition of takeover-capable-AI (abbreviated as “TCAI” for the rest of this comment) in footnote 2 quite makes sense. I’m worried that too much of the action is in “if no other actors had access to powerful AI systems”, and not that much action is in the exact capabilities of the “TCAI”. In particular: Maybe we already have TCAI (by that definition) because if a frontier AI company or a US adversary was blessed with the assumption “no other actor will have access to powerful AI systems”, they’d have a huge advantage over the rest of the world (as soon as they develop more powerful AI), plausibly implying that it’d be right to forecast a >25% chance of them successfully taking over if they were motivated to try.
And this seems somewhat hard to disentangle from stuff that is supposed to count according to footnote 2, especially: “Takeover via the mechanism of an AI escaping, independently building more powerful AI that it controls, and then this more powerful AI taking over would” and “via assisting the developers in a power grab, or via partnering with a US adversary”. (Or maybe the scenario in the 1st paragraph is supposed to be excluded because current AI isn’t agentic enough to “assist”/“partner” with allies as opposed to just being used as a tool?)
What could a competing definition be? Thinking about what we care most about… I think two events especially stand out to me:
When would it plausibly be catastrophically bad for an adversary to steal an AI model?
When would it plausibly be catastrophically bad for an AI to be power-seeking and non-controlled?
Maybe a better definition would be to directly talk about these two events? So for example...
“Steal is catastrophic” would be true if...
“Frontier AI development projects immediately acquire good enough security to keep future model weights secure” has significantly less probability of AI-assisted takeover than
“Frontier AI development projects immediately have their weights stolen, and then acquire security that’s just as good as in (1a).”[1]
“Power-seeking and non-controlled is catastrophic” would be true if...
“Frontier AI development projects immediately acquire good enough judgment about power-seeking-risk that they henceforth choose to not deploy any model that would’ve been net-negative for them to deploy” has significantly less probability of AI-assisted takeover than
“Frontier AI development projects acquire the level of judgment described in (2a) 6 months later.”[2]
Where “significantly less probability of AI-assisted takeover” could be e.g. at least 2x less risk.
The motivation for assuming “future model weights secure” in both (1a) and (1b) is so that the downside of getting the model weights stolen imminently isn’t nullified by the fact that they’re very likely to get stolen a bit later, regardless. Because many interventions that would prevent model weight theft this month would also help prevent it future months. (And also, we can’t contrast 1a’=”model weights are permanently secure” with 1b’=”model weights get stolen and are then default-level-secure”, because that would already have a really big effect on takeover risk, purely via the effect on future model weights, even though current model weights probably aren’t that important.)
The motivation for assuming “good future judgment about power-seeking-risk” is similar to the motivation for assuming “future model weights secure” above. The motivation for choosing “good judgment about when to deploy vs. not” rather than “good at aligning/controlling future models” is that a big threat model is “misaligned AIs outcompete us because we don’t have any competitive aligned AIs, so we’re stuck between deploying misaligned AIs and being outcompeted” and I don’t want to assume away that threat model.
I agree that the notion of takeover-capable AI I use is problematic and makes the situation hard to reason about, but I intentionally rejected the notions you propose as they seemed even worse to think about from my perspective.
Is there some reason for why current AI isn’t TCAI by your definition?
(I’d guess that the best way to rescue your notion is to stipulate that the TCAIs must have >25% probability of taking over themselves. Possibly with assistance from humans, possibly by manipulating other humans who think they’re being assisted by the AIs — but ultimately the original TCAIs should be holding the power in order for it to count. That would clearly exclude current systems. But I don’t think that’s how you meant it.)
Oh sorry. I somehow missed this aspect of your comment.
Here’s a definition of takeover-capable AI that I like: the AI is capable enough that plausible interventions on known human controlled institutions within a few months no longer suffice to prevent plausible takeover. (Which implies that making the situation clear to the world is substantially less useful and human controlled institutions can no longer as easily get a seat at the table.)
Under this definition, there are basically two relevant conditions:
The AI is capable enough to itself take over autonomously. (In the way you defined it, but also not in a way where intervening on human institutions can still prevent the takeover, so e.g., the AI just having a rogue deployment within OpenAI doesn’t suffice if substantial externally imposed improvements to OpenAI’s security and controls would defeat the takeover attempt.)
Or human groups can do a nearly immediate takeover with the AI such that they could then just resist such interventions.
Hm — what are the “plausible interventions” that would stop China from having >25% probability of takeover if no other country could build powerful AI? Seems like you either need to count a delay as successful prevention, or you need to have a pretty low bar for “plausible”, because it seems extremely difficult/costly to prevent China from developing powerful AI in the long run. (Where they can develop their own supply chains, put manufacturing and data centers underground, etc.)
I really like the framing here, of asking whether we’ll see massive compute automation before [AI capability level we’re interested in]. I often hear people discuss nearby questions using IMO much more confusing abstractions, for example:
“How much is AI capabilities driven by algorithmic progress?” (problem: obscures dependence of algorithmic progress on compute for experimentation)
“How much AI progress can we get ‘purely from elicitation’?” (lots of problems, e.g. that eliciting a capability might first require a (possibly one-time) expenditure of compute for exploration)
My inside view sense is that the feasibility of takeover-capable AI without massive compute automation is about 75% likely if we get AIs that dominate top-human-experts prior to 2040.[6] Further, I think that in practice, takeover-capable AI without massive compute automation is maybe about 60% likely.
Is this because:
You think that we’re >50% likely to not get AIs that dominate top human experts before 2040? (I’d be surprised if you thought this.)
The words “the feasibility of” importantly change the meaning of your claim in the first sentence? (I’m guessing it’s this based on the following parenthetical, but I’m having trouble parsing.)
Overall, it seems like you put substantially higher probability than I do on getting takeover capable AI without massive compute automation (and especially on getting a software-only singularity). I’d be very interested in understanding why. A brief outline of why this doesn’t seem that likely to me:
My read of the historical trend is that AI progress has come from scaling up all of the factors of production in tandem (hardware, algorithms, compute expenditure, etc.).
Scaling up hardware production has always been slower than scaling up algorithms, so this consideration is already factored into the historical trends. I don’t see a reason to believe that algorithms will start running away with the game.
Maybe you could counter-argue that algorithmic progress has only reflected returns to scale from AI being applied to AI research in the last 12-18 months and that the data from this period is consistent with algorithms becoming relatively more important compared to other factors?
I don’t see a reason that “takeover-capable” is a capability level at which algorithmic progress will be deviantly important relative to this historical trend.
I’d be interested either in hearing you respond to this sketch or in sketching out your reasoning from scratch.
I put roughly 50% probability on feasibility of software-only singularity.[1]
(I’m probably going to be reinventing a bunch of the compute-centric takeoff model in slightly different ways below, but I think it’s faster to partially reinvent than to dig up the material, and I probably do use a slightly different approach.)
My argument here will be a bit sloppy and might contain some errors. Sorry about this. I might be more careful in the future.
The key question for software-only singularity is: “When the rate of labor production is doubled (as in, as if your employees ran 2x faster[2]), does that more than double or less than double the rate of algorithmic progress? That is, algorithmic progress as measured by how fast we increase the labor production per FLOP/s (as in, the labor production from AI labor on a fixed compute base).” This is a very economics-style way of analyzing the situation, and I think this is a pretty reasonable first guess. Here’s a diagram I’ve stolen from Tom’s presentation on explosive growth illustrating this:
Basically, every time you double the AI labor supply, does the time until the next doubling (driven by algorithmic progress) increase (fizzle) or decrease (foom)? I’m being a bit sloppy in saying “AI labor supply”. We care about a notion of parallelism-adjusted labor (faster laborers are better than more laborers) and quality increases can also matter. I’ll make the relevant notion more precise below.
I’m about to go into a relatively complicated argument for why I think the historical data supports software-only singularity. If you want more basic questions answered (such as “Doesn’t retraining make this too slow?”), consider looking at Tom’s presentation on takeoff speeds.
Here’s a diagram that you might find useful in understanding the inputs into AI progress:
And here is the relevant historical context in terms of trends:
Historically, algorithmic progress in LLMs looks like 3-4x per year including improvements on all parts of the stack.[3] This notion of algorithmic progress is “reduction in compute needed to reach a given level of frontier performance”, which isn’t equivalent to increases in the rate of labor production on a fixed compute base. I’ll talk more about this below.
This has been accompanied by increases of around 4x more hardware per year[4] and maybe 2x more quality-adjusted (parallel) labor working on LLM capabilities per year. I think total employees working on LLM capabilities have been roughly 3x-ing per year (in recent years), but quality has been decreasing over time.
A 2x increase in the quality-adjusted parallel labor force isn’t as good as the company getting the same sorts of labor tasks done 2x faster (as in, the resulting productivity from having your employees run 2x faster) due to parallelism tax (putting aside compute bottlenecks for now). I’ll apply the same R&D parallelization penalty as used in Tom’s takeoff model and adjust this down by a power of 0.7 to yield 2^0.7 ≈ 1.6x per year in increased labor production rate. (So, it’s as though the company keeps the same employees, but those employees operate 1.6x faster each year.)
It looks like the fraction of progress driven by algorithmic progress has been getting larger over time.
So, overall, we’re getting 3-4x algorithmic improvement per year being driven by 1.6x more labor per year and 4x more hardware.
So, the key question is how much of this algorithmic improvement is being driven by labor vs. by hardware. If it is basically all hardware, then the returns to labor must be relatively weak and software-only singularity seems unlikely. If it is basically all labor, then we’re seeing 3-4x algorithmic improvement per year for 1.6x more labor per year, which means the returns to labor look quite good (at least historically). Based on some guesses and some poll questions, my sense is that capabilities researchers would operate about 2.5x slower if they had 10x less compute (after adaptation), so the production function is probably proportional to compute^0.4 · labor^0.6 (0.4 = log10(2.5)). (This is assuming a Cobb-Douglas production function.) Edit: see the derivation of the relevant thing in Deep’s comment, my old thing was wrong[5].
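As a minimal sketch (in Python, using the rough guesses above rather than any measured values), here is where the 1.6x labor-production figure and the 0.4 compute share come from:

```python
import math

# Rough inputs used above; both are guesses, not measurements.

# ~2x/year growth in quality-adjusted parallel labor, discounted by the R&D
# parallelization penalty of 0.7 (from Tom's takeoff model) to get the
# equivalent "employees running faster" rate.
parallel_labor_growth = 2.0
parallel_penalty = 0.7
labor_production_growth = parallel_labor_growth ** parallel_penalty
print(f"labor production growth: ~{labor_production_growth:.2f}x/year")  # ~1.6x

# Compute share implied by "researchers would run ~2.5x slower with 10x less
# compute": solve 10**p = 2.5 for p.
p = math.log10(2.5)
print(f"implied compute share: ~{p:.2f} (so labor share ~{1 - p:.2f})")  # ~0.4 / ~0.6
```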
Now, let’s talk more about the transfer from algorithmic improvement to the rate of labor production. A 2x algorithmic improvement in LLMs makes it so that you can reach the same (frontier) level of performance for 2x less training compute, but we actually care about a somewhat different notion for software-only singularity: how much you can increase the production rate of labor (the thing that we said was increasing at roughly a rate of 1.6x per year by using more human employees). My current guess is that every 2x algorithmic improvement in LLMs increases the rate of labor production by 2^1.1, and I’m reasonably confident that the exponent isn’t much below 1.0. I don’t currently have a very principled estimation strategy for this, and it’s somewhat complex to reason about. I discuss this in the appendix below.
So, if this exponent is around 1, our central estimate of 2.3 from above corresponds to software-only singularity and our estimate of 1.56 from above under more pessimistic assumptions corresponds to this not being feasible. Overall, my sense is that the best guess numbers lean toward software-only singularity.
More precisely, software-only singularity that goes for >500x effective compute gains above trend (to the extent this metric makes sense, this is roughly >5 years of algorithmic progress). Note that you can have software-only singularity be feasible while buying tons more hardware at the same time. And if this ends up expanding compute production by >10x using AI labor, then this would count as massive compute automation despite also having a feasible software-only singularity. (However, in most worlds, I expect software-only singularity to be fast enough, if feasible, that we don’t see this.)
Rather than denominating labor in accelerating employees, we could instead denominate in number of parallel employees. This would work equivalently (we can always convert into equivalents to the extent these things can funge), but because we can actually accelerate employees and the serial vs. parallel distinction is important, I think it is useful to denominate in accelerating employees.
I would have previously cited 3x, but recent progress looks substantially faster (with DeepSeek v3 and reasoning models seemingly indicating somewhat faster than 3x progress IMO), so I’ve revised to 3-4x.
This includes both increased spending and improved chips. Here, I’m taking my best guess at increases in hardware usage for training and transferring this to research compute usage on the assumption that training compute and research compute have historically been proportional.
Edit: the reasoning I did here was off. Here was the old text: so the production function is probably roughly α · compute^0.4 · labor^0.6 (0.4 = log10(2.5)). Increasing compute by 4x and labor by 1.6x increases algorithmic improvement by 3-4x, let’s say 3.5x, so we have 3.5 = α · 4^0.4 · 1.6^0.6, i.e. α = 3.5 / (4^0.4 · 1.6^0.6) = 1.52. Thus, doubling labor would increase algorithmic improvement by 1.52 · 2^0.6 = 2.3. This is very sensitive to the exact numbers; if we instead used 3x slower instead of 2.5x slower, we would have gotten that doubling labor increases algorithmic improvement by 1.56, which is substantially lower. Obviously, all the exact numbers here are highly uncertain.
Hey Ryan! Thanks for writing this up—I think this whole topic is important and interesting.
I was confused about how your analysis related to the Epoch paper, so I spent a while with Claude analyzing it. I did a re-analysis that finds similar results, but also finds (I think) some flaws in your rough estimate. (Keep in mind I’m not an expert myself, and I haven’t closely read the Epoch paper, so I might well be making conceptual errors. I think the math is right though!)
I’ll walk through my understanding of this stuff first, then compare to your post. I’ll be going a little slowly (A) to help refresh my memory when I reference this later, (B) to make it easy to call out mistakes, and (C) to hopefully make this legible to others who want to follow along.
Using Ryan’s empirical estimates in the Epoch model
The Epoch model
The Epoch paper models growth with the following equation: 1. d(ln A)/dt ∼ A^(−β) · E^λ,
where A = efficiency and E = research input. We want to consider worlds with a potential software takeoff, meaning that increases in AI efficiency directly feed into research input, which we model as d(ln A)/dt ∼ A^(−β) · A^λ = A^(λ−β). So the key consideration seems to be the ratio λ/β. If it’s 1, we get steady exponential growth from scaling inputs; greater, superexponential; smaller, subexponential.[1]
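To make the fizzle/foom distinction concrete, here is a toy numerical integration of equation 1 with β normalized to 1 and arbitrary units; it isn’t from the Epoch paper, just a sketch of how the ratio λ/β controls whether successive doublings of A speed up or slow down:

```python
import math

def doubling_times(ratio, dt=0.01, t_max=20.0, max_lnA=50.0):
    """Euler-integrate d(ln A)/dt = A**(ratio - 1), i.e. beta = 1, lambda = ratio.
    Returns the times taken for successive doublings of A (arbitrary units)."""
    lnA, t = 0.0, 0.0
    times, last_t, target = [], 0.0, math.log(2)
    while t < t_max and lnA < max_lnA:
        lnA += math.exp(lnA * (ratio - 1.0)) * dt
        t += dt
        while lnA >= target:
            times.append(round(t - last_t, 2))
            last_t, target = t, target + math.log(2)
    return times

for ratio in (0.8, 1.0, 1.2):  # lambda/beta below, at, and above 1
    print(ratio, doubling_times(ratio)[:6])
# ratio < 1: doubling times grow (fizzle); ratio = 1: constant; ratio > 1: they shrink (foom).
```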
Fitting the model
How can we learn about this ratio from historical data?
Let’s pretend history has been convenient and we’ve seen steady exponential growth in both variables, so A = A_0·e^(rt) and E = E_0·e^(qt). Then d(ln A)/dt has been constant over time, so by equation 1, A(t)^(−β) · E(t)^λ has been constant as well. Substituting in for A and E, we find that A_0·e^(−βrt) · E_0·e^(λqt) is constant over time, which is only possible if βr = λq and the exponent is always zero. Thus if we’ve seen steady exponential growth, the historical value of our key ratio is:
2. λ/β = r/q.
Intuitively, if we’ve seen steady exponential growth while research input has increased more slowly than research output (AI efficiency), there are superlinear returns to scaling inputs.
Introducing the Cobb-Douglas function
But wait! E, research input, is an abstraction that we can’t directly measure. Really there’s both compute and labor inputs. Those have indeed been growing roughly exponentially, but at different rates.
Intuitively, it makes sense to say that “effective research input” has grown as some kind of weighted average of the rate of compute and labor input growth. This is my take on why a Cobb-Douglas function of form (3) E ∼ C^p · L^(1−p), with a weight parameter 0 < p < 1, is useful here: it’s a weighted geometric average of the two inputs, so its growth rate is a weighted average of their growth rates.
Writing that out: in general, say both inputs have grown exponentially, so C(t) = C_0·e^(q_c·t) and L(t) = L_0·e^(q_l·t). Then E has grown as E(t) = E_0·e^(qt) = E_0·e^((p·q_c + (1−p)·q_l)·t), so q is the weighted average (4) q = p·q_c + (1−p)·q_l of the growth rates of labor and capital.
Then, using Equation 2, we can estimate our key ratio λ/β as r/q = r / (p·q_c + (1−p)·q_l).
Let’s get empirical!
Plugging in your estimates:
Historical compute scaling of 4x/year gives q_c = ln(4);
Historical labor scaling of 1.6x/year gives q_l = ln(1.6);
Historical compute elasticity on research outputs of 0.4 gives p = 0.4;
Historical algorithmic efficiency growth of ~3.5x/year gives r = ln(3.5), so λ/β = r/q = ln(3.5) / (0.4·ln(4) + 0.6·ln(1.6)) ≈ 1.5.
But wait: we’re not done yet! Under our Cobb-Douglas assumption, scaling labor by a factor of 2 isn’t as good as scaling all research inputs by a factor of 2; it’s only 2^0.6 / 2 as good.
Plugging in Equation 3 (which describes research input E in terms of compute and labor) to Equation 1 (which estimates AI progress A based on research), our adjusted form of the Epoch model is d(ln A)/dt ∼ A^(−β) · E^λ ∼ A^(−β) · C^(pλ) · L^((1−p)λ).
Under a software-only singularity, we hold compute constant while scaling labor with AI efficiency, so d(ln A)/dt ∼ A(t)^(−β) · L(t)^((1−p)λ), multiplied by a fixed compute term. Since labor scales as A, we have d(ln A)/dt ∼ A(t)^(−β) · A(t)^(λ(1−p)) = A(t)^(λ(1−p)−β). By the same analysis as in our first section, we can see A grows exponentially if λ(1−p)/β = 1, and grows superexponentially if this ratio is >1. So our key ratio λ/β just gets multiplied by 1−p, and it wasn’t a waste to find it, phew!
Now we get the true form of our equation: we get a software-only foom iff (λ/β)·(1−p) > 1, or (via equation 2) iff we see empirically that (r/q)·(1−p) > 1. Call this the takeoff ratio: it corresponds to a) how much AI progress scales with inputs and b) how much of a penalty we take for not scaling compute.
Result: Above, we got λ/β ≈ 1.5, so our takeoff ratio is 0.6 · 1.5 = 0.9. That’s quite close! If we think it’s more reasonable to use a historical growth rate of 4x instead of 3.5x, we’d increase our takeoff ratio by a factor of ln(4)/ln(3.5) ≈ 1.1, to a ratio of 0.99, right on the knife edge of FOOM.[4] [note: I previously had the wrong numbers here: I had λ/β = 1.6, which would mean the 4x/year case has a takeoff ratio of 1.05, putting it into FOOM land]
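For reference, a short sketch reproducing the arithmetic above (same rough inputs as earlier in the thread; nothing new is estimated here):

```python
import math

p = 0.4                     # compute share in the Cobb-Douglas research input
q_c = math.log(4)           # compute growth rate (4x/year)
q_l = math.log(1.6)         # serial-equivalent labor growth rate (1.6x/year)
q = p * q_c + (1 - p) * q_l # growth rate of effective research input E

for algo_growth in (3.5, 4.0):          # historical algorithmic progress per year
    r = math.log(algo_growth)
    key_ratio = r / q                   # lambda / beta
    takeoff_ratio = (1 - p) * key_ratio # software-only case: compute held fixed
    print(f"{algo_growth}x/year: lambda/beta = {key_ratio:.2f}, "
          f"takeoff ratio = {takeoff_ratio:.2f}")
# ~1.50 and 0.90 for 3.5x/year; ~1.66 and 0.99 for 4x/year.
```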
So this isn’t too far off from your results in terms of implications, but it is somewhat different (no FOOM for 3.5x, less sensitivity to the exact historical growth rate).
Analyzing your approach:
Tweaking alpha:
Your estimate of α is in fact similar in form to my ratio r/q, but what you’re calculating instead is α = e^r / e^q = 3.5 / (4^0.4 · 1.6^0.6).
One indicator that something’s wrong is that your result involves checking whether α · 2^(1−p) > 2, or equivalently whether ln(α) + (1−p)·ln(2) > ln(2), or equivalently whether ln(α) > p·ln(2). But the choice of 2 is arbitrary: conceptually, you just want to check whether scaling software by a factor n increases outputs by a factor n or more. Yet ln(α) − p·ln(n) clearly varies with n.
One way of parsing the problem is that alpha is (implicitly) time dependent—it is equal to exp(r * 1 year) / exp(q * 1 year), a ratio of progress vs inputs in the time period of a year. If you calculated alpha based on a different amount of time, you’d get a different value. By contrast, r/q is a ratio of rates, so it stays the same regardless of what timeframe you use to measure it.[5]
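A quick numerical illustration of this point (using the same rough inputs as above): α = exp(r·T)/exp(q·T) changes with the measurement window T, while r/q does not.

```python
import math

# r and q as estimated above (3.5x/year algorithmic progress; 4x compute, 1.6x labor).
r = math.log(3.5)
q = 0.4 * math.log(4) + 0.6 * math.log(1.6)

for T in (0.5, 1.0, 2.0):  # measurement window in years
    alpha = math.exp(r * T) / math.exp(q * T)
    print(f"T = {T} yr: alpha = {alpha:.2f}, r/q = {r / q:.2f}")
# alpha varies (~1.23, ~1.52, ~2.30); r/q stays ~1.50 throughout.
```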
Maybe I’m confused about what your Cobb-Douglas function is meant to be calculating—is it E within an Epoch-style takeoff model, or something else?
Nuances:
Does Cobb-Douglas make sense?
The geometric average of rates thing makes sense, but it feels weird that that simple intuitive approach leads to a functional form (Cobb-Douglas) that also has other implications.
Wikipedia says Cobb-Douglas functions can have the exponents not add to 1 (while both being between 0 and 1). Maybe this makes sense here? Not an expert.
How seriously should we take all this?
This whole thing relies on...
Assuming smooth historical trends
Assuming those trends continue in the future
And those trends themselves are based on functional fits to rough / unclear data.
It feels like this sort of thing is better than nothing, but I wish we had something better.
I really like the various nuances you’re adjusting for, like parallel vs serial scaling, and especially distinguishing algorithmic improvement from labor efficiency. [6] Thinking those things through makes this stuff feel less insubstantial and approximate...though the error bars still feel quite large.
Actually there’s a complexity here, which is that scaling labor alone may be less efficient than scaling “research inputs” which include both labor and compute. We’ll come to this in a few paragraphs.
I originally had 1.6 here, but as Ryan points out in a reply it’s actually 1.5. I’ve tried to reconstruct what I could have put into a calculator to get 1.6 instead, and I’m at a loss!
I was curious how aggressive the superexponential growth curve would be with a takeoff ratio of a mere 0.96 · 1.1 ≈ 1.06. A couple of Claude queries gave me different answers (maybe because the growth is so extreme that different solvers give meaningfully different approximations?), but they agreed that growth is fairly slow in the first year (~5x) and then hits infinity by the end of the second year. I wrote this comment with the wrong numbers (0.96 instead of 0.9), so it doesn’t accurately represent what you get if you plug in 4x capability growth per year. Still cool to get a sense of what these curves look like, though.
I think this can be understood in terms of the alpha-being-implicitly-a-timescale-function thing: if you compare an alpha value with the ratio of growth you’re likely to see during the same time period, e.g. alpha(1 year) and n = one doubling, you probably get reasonable-looking results.
I find it annoying that people conflate “increased efficiency of doing known tasks” with “increased ability to do new useful tasks”. It seems to me that these could be importantly different, although it’s hard to even settle on a reasonable formalization of the latter. Some reasons this might be okay:
There’s a fuzzy conceptual boundary between the two: if GPT-n can do the task at a 0.01% success rate, does that count as a “known task”? What about if it can do each of 10 components at 0.01% success, so in practice we’ll never see it succeed if run without human guidance, but we know it’s technically possible?
Under a software singularity situation, maybe the working hypothesis is that the model can do everything necessary to improve itself a bunch, maybe just not very efficiently yet. So we only need efficiency growth, not to increase the task set. That seems like a stronger assumption than most make, but maybe a reasonable weaker assumption is that the model will ‘unlock’ the necessary new tasks over time, after which point they become subject to rapid efficiency growth.
And empirically, we have in fact seen rapid unlocking of new capabilities, so it’s not crazy to approximate “being able to do new things” as a minor but manageable slowdown to the process of AI replacing human AI R&D labor.
I think you are correct with respect to my estimate of α and the associated model I was using. Sorry about my error here. I think I was fundamentally confusing a few things in my head when writing out the comment.
I think your refactoring of my strategy is correct and I tried to check it myself, though I don’t feel confident in verifying it is correct.
Your estimate doesn’t account for the conversion between algorithmic improvement and labor efficiency, but it is easy to add this in by just changing the historical algorithmic efficiency improvement of 3.5x/year to instead be the adjusted effective labor efficiency rate and then solving identically. I was previously thinking the relationship was that labor efficiency was around the same as algorithmic efficiency, but I now think this is more likely to be around algo_efficiency^2 based on Tom’s comment.
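To illustrate what this adjustment does (a hypothetical recomputation under the algo_efficiency^2 guess, not a number stated in either comment), plugging the adjusted labor-efficiency rate into Deep’s formula gives:

```python
import math

# Same inputs as before; only the numerator changes: labor efficiency is taken
# to grow roughly as algo_efficiency**2 (the guess discussed above).
p = 0.4
q = 0.4 * math.log(4) + 0.6 * math.log(1.6)

for algo_growth in (3.5, 4.0):
    labor_efficiency_growth = algo_growth ** 2          # e.g. 3.5x -> ~12.25x/year
    takeoff_ratio = (1 - p) * math.log(labor_efficiency_growth) / q
    print(f"{algo_growth}x algorithmic -> takeoff ratio ~{takeoff_ratio:.2f}")
# ~1.8 (for 3.5x) or ~2.0 (for 4x): comfortably above 1 under this assumption.
```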
Neat, thanks a ton for the algorithmic-vs-labor update—I appreciated that you’d distinguished those in your post, but I forgot to carry that through in mine! :)
And oops, I really don’t know how I got to 1.6 instead of 1.5 there. Thanks for the flag, have updated my comment accordingly!
The square relationship idea is interesting—that factor of 2 is a huge deal. Would be neat to see a Guesstimate or Squiggle version of this calculation that tries to account for the various nuances Tom mentions, and has error bars on each of the terms, so we both get a distribution of r and a sensitivity analysis. (Maybe @Tom Davidson already has this somewhere? If not I might try to make a crappy version myself, or poke talented folks I know to do a good version :)
It feels like this sort of thing is better than nothing, but I wish we had something better.
The existing Epoch paper is pretty good, but it doesn’t directly target LLMs, which seems somewhat sad.
The thing I’d be most excited about is:
Epoch does an in-depth investigation using an estimation methodology that directly targets LLMs (rather than looking at returns in some other domains).
They use public data and solicit data from companies about algorithmic improvement, head count, compute on experiments etc.
(Some) companies provide this data. Epoch potentially doesn’t publish this exact data and instead just publishes the results of the final analysis to reduce capabilities externalities. (IMO, companies are somewhat unlikely to do this, but I’d like to be proven wrong!)
(I’m going through this and understanding where I made an error with my approach to α. I think I did make an error, but I’m trying to make sure I’m not still confused. Edit: I’ve figured this out, see my other comment.)
Wikipedia says Cobb-Douglas functions can have the exponents not add to 1 (while both being between 0 and 1). Maybe this makes sense here? Not an expert.
It shouldn’t matter in this case because we’re raising the whole value of E to λ.
Once AI has automated AI R&D, will software progress become faster or slower over time? This depends on the extent to which software improvements get harder to find as software improves – the steepness of the diminishing returns.
We can ask the following crucial empirical question:
When (cumulative) cognitive research inputs double, how many times does software double?
If the answer is “< 1”, then software progress will slow down over time. If the answer is “1”, software progress will remain at the same exponential rate. If the answer is “>1”, software progress will speed up over time.
The bolded question can be studied empirically, by looking at how many times software has doubled each time the human researcher population has doubled.
(What does it mean for “software” to double? A simple way of thinking about this is that software doubles when you can run twice as many copies of your AI with the same compute. But software improvements don’t just improve runtime efficiency: they also improve capabilities. To incorporate these improvements, we’ll ultimately need to make some speculative assumptions about how to translate capability improvements into an equivalently-useful runtime efficiency improvement.)
The best quality data on this question is Epoch’s analysis of computer vision training efficiency. They estimate r = ~1.4: every time the researcher population doubled, training efficiency doubled 1.4 times. (Epoch’s preliminary analysis indicates that the r value for LLMs would likely be somewhat higher.) We can use this as a starting point, and then make various adjustments:
Upwards for improving capabilities. Improving training efficiency improves capabilities, as you can train a model with more “effective compute”. To quantify this effect, imagine we use a 2X training efficiency gain to train a model with twice as much “effective compute”. How many times would that double “software”? (I.e., how many doublings of runtime efficiency would have the same effect?) There are various sources of evidence on how much capabilities improve every time training efficiency doubles: toy ML experiments suggest the answer is ~1.7; human productivity studies suggest the answer is ~2.5. I put more weight on the former, so I’ll estimate 2. This doubles my median estimate to r = ~2.8 (= 1.4 * 2).
Upwards for post-training enhancements. So far, we’ve only considered pre-training improvements. But post-training enhancements like fine-tuning, scaffolding, and prompting also improve capabilities (o1 was developed using such techniques!). It’s hard to say how large an increase we’ll get from post-training enhancements. These can allow faster thinking, which could be a big factor. But there might also be strong diminishing returns to post-training enhancements holding base models fixed. I’ll estimate a 1-2X increase, and adjust my median estimate to r = ~4 (2.8 × 1.45 ≈ 4).
Downwards for less growth in compute for experiments. Today, rising compute means we can run increasing numbers of GPT-3-sized experiments each year. This helps drive software progress. But compute won’t be growing in our scenario. That might mean that returns to additional cognitive labour diminish more steeply. On the other hand, the most important experiments are ones that use similar amounts of compute to training a SOTA model. Rising compute hasn’t actually increased the number of these experiments we can run, as rising compute increases the training compute for SOTA models. And in any case, this doesn’t affect post-training enhancements. But this still reduces my median estimate down to r = ~3. (See Eth (forthcoming) for more discussion.)
Downwards for fixed scale of hardware. In recent years, the scale of hardware available to researchers has increased massively. Researchers could invent new algorithms that only work at the new hardware scales, for which no one had previously tried to develop algorithms. Researchers may have been plucking low-hanging fruit for each new scale of hardware. But in the software intelligence explosions I’m considering, this won’t be possible because the hardware scale will be fixed. OAI estimate ImageNet efficiency via a method that accounts for this (by focussing on a fixed capability level), and find a 16-month doubling time, as compared with Epoch’s 9-month doubling time. This reduces my estimate down to r = ~1.7 (3 × 9/16).
Downwards for diminishing returns becoming steeper over time. In most fields, returns diminish more steeply than in software R&D. So perhaps software will tend to become more like the average field over time. To estimate the size of this effect, we can take our estimate that software is ~10 OOMs from physical limits (discussed below), and assume that for each OOM increase in software, r falls by a constant amount, reaching zero once physical limits are reached. If r = 1.7, then this implies that r reduces by 0.17 for each OOM. Epoch estimates that pre-training algorithmic improvements are growing by an OOM every ~2 years, which would imply a reduction in r of 1.02 (6 × 0.17) by 2030. But when we include post-training enhancements, the decrease will be smaller (as [reason]), perhaps ~0.5. This reduces my median estimate to r = ~1.2 (1.7 − 0.5).
Overall, my median estimate of r is 1.2. I use a log-uniform distribution with the bounds 3X higher and lower (0.4 to 3.6).
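A compact recap of this adjustment chain (just re-doing the arithmetic above with the same rounding; no new estimates):

```python
# Chaining Tom's adjustments to Epoch's computer-vision estimate of r.
r = 1.4                 # Epoch's computer-vision estimate
r = r * 2               # up for improving capabilities                      -> 2.8
r = round(r * 1.45)     # up for post-training enhancements                  -> ~4
r = 3                   # down for less growth in compute for experiments    -> ~3
r = r * 9 / 16          # down for fixed hardware scale (9- vs 16-month doubling) -> ~1.7
r = r - 0.5             # down for steeper diminishing returns over time
print(round(r, 2))      # ~1.19, i.e. the ~1.2 median; log-uniform bounds 3x either way give 0.4 to 3.6
```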
My sense is that I start with a higher r value due to the LLM case looking faster (and not feeling the need to adjust downward in a few places like you do in the LLM case). Obviously the numbers in the LLM case are much less certain given that I’m guessing based on qualitative improvement and looking at some open source models, but being closer to what we actually care about maybe overwhelms this.
I also think I’d get a slightly lower update on the diminishing returns case due to thinking it has a good chance of having substantially sharper diminishing returns as you get closer and closer rather than having linearly decreasing r (based on some first-principles reasoning and my understanding of how returns diminished in the semiconductor case).
But the biggest delta is that I think I wasn’t pricing in the importance of increasing capabilities. (Which seems especially important if you apply a large R&D parallelization penalty.)
Obviously the numbers in the LLM case are much less certain given that I’m guessing based on qualitative improvement and looking at some open source models,
Sorry, I don’t follow why they’re less certain?
based on some first principles reasoning and my understanding of how returns diminished in the semi-conductor case
I’d be interested to hear more about this. The semiconductor case is hard as we don’t know how far we are from limits, but if we use Landauer’s limit then I’d guess you’re right. There’s also uncertainty about how much algorithmic progress we have made and will make.
I’m just eyeballing the rate of algorithmic progress while in the computer vision case, we can at least look at benchmarks and know the cost of training compute for various models.
My sense is that you have generalization issues in the computer vision case while in the frontier LLM case you have issues with knowing the actual numbers (in terms of number of employees and cost of training runs). I’m also just not carefully doing the accounting.
I’d be interested to hear more about this.
I don’t have much to say here sadly, but I do think investigating this could be useful.
Really appreciate you covering all these nuances, thanks Tom!
Can you give a pointer to the studies you mentioned here?
There are various sources of evidence on how much capabilities improve every time training efficiency doubles: toy ML experiments suggest the answer is ~1.7; human productivity studies suggest the answer is ~2.5. I put more weight on the former, so I’ll estimate 2. This doubles my median estimate to r = ~2.8 (= 1.4 * 2).
Here’s a simple argument I’d be keen to get your thoughts on: On the Possibility of a Tastularity
Research taste is the collection of skills including experiment ideation, literature review, experiment analysis, etc. that collectively determine how much you learn per experiment on average (perhaps alongside another factor accounting for inherent problem difficulty / domain difficulty, of course, and diminishing returns)
Human researchers seem to vary quite a bit in research taste—specifically, the difference between 90th percentile professional human researchers and the very best seems like maybe an order of magnitude? Depends on the field, etc. And the tails are heavy; there is no sign of the distribution bumping up against any limits.
Yet the causes of these differences are minor! Take the very best human researchers compared to the 90th percentile. They’ll have almost the same brain size, almost the same amount of experience, almost the same genes, etc. in the grand scale of things.
This means we should assume that if the human population were massively bigger, e.g. trillions of times bigger, there would be humans whose brains don’t look that different from the brains of the best researchers on Earth, and yet who are an OOM or more above the best Earthly scientists in research taste. -- AND it suggests that in the space of possible mind-designs, there should be minds which are e.g. within 3 OOMs of those brains in every dimension of interest, and which are significantly better still in the dimension of research taste. (How much better? Really hard to say. But it would be surprising if it was only, say, 1 OOM better, because that would imply that human brains are running up against the inherent limits of research taste within a 3-OOM mind design space, despite human evolution having only explored a tiny subspace of that space, and despite the human distribution showing no signs of bumping up against any inherent limits)
OK, so what? So, it seems like there’s plenty of room to improve research taste beyond human level. And research taste translates pretty directly into overall R&D speed, because it’s about how much experimentation you need to do to achieve a given amount of progress. With enough research taste, you don’t need to do experiments at all—or rather, you look at the experiments that have already been done, and you infer from them all you need to know to build the next design or whatever.
Anyhow, tying this back to your framework: What if the diminishing returns / increasing problem difficulty / etc. dynamics are such that, if you start from a top-human-expert-level automated researcher, and then do additional AI research to double its research taste, and then do additional AI research to double its research taste again, etc. the second doubling happens in less time than it took to get to the first doubling? Then you get a singularity in research taste (until these conditions change of course) -- the Tastularity.
How likely is the Tastularity? Well, again one piece of evidence here is the absurdly tiny differences between humans that translate to huge differences in research taste, and the heavy-tailed distribution. This suggests that we are far from any inherent limits on research taste even for brains roughly the shape and size and architecture of humans, and presumably the limits for a more relaxed (e.g. 3 OOM radius in dimensions like size, experience, architecture) space in mind-design are even farther away. It similarly suggests that there should be lots of hill-climbing that can be done to iteratively improve research taste.
How does this relate to software-singularity? Well, research taste is just one component of algorithmic progress; there is also speed, # of parallel copies & how well they coordinate, and maybe various other skills besides such as coding ability. So even if the Tastularity isn’t possible, improvements in taste will stack with improvements in those other areas, and the sum might cross the critical threshold.
In my framework, this is basically an argument that algorithmic-improvement-juice can be translated into a large improvement in AI R&D labor production via the mechanism of greatly increasing the productivity per “token” (or unit of thinking compute or whatever). See my breakdown here where I try to convert from historical algorithmic improvement to making AIs better at producing AI R&D research.
Your argument is basically that this taste mechanism might have higher returns than reducing cost to run more copies.
I agree this sort of argument means that returns to algorithmic improvement on AI R&D labor production might be bigger than you would otherwise think. This is both because this mechanism might be more promising than other mechanisms and even if it is somewhat less promising, diverse approaches make returns diminish less aggressively. (In my model, this means that the best-guess conversion might be more like algo_improvement^1.3 rather than algo_improvement^1.0.)
I think it might be somewhat tricky to train AIs to have very good research taste, but this doesn’t seem that hard via training them on various prediction objectives.
At a more basic level, I expect that training AIs to predict the results of experiments and then running experiments based on value of information as estimated partially based on these predictions (and skipping experiments with certain results and more generally using these predictions to figure out what to do) seems pretty promising. It’s really hard to train humans to predict the results of tens of thousands of experiments (both small and large), but this is relatively clean outcomes based feedback for AIs.
I don’t really have a strong inside view on how much the “AI R&D research taste” mechanism increases the returns to algorithmic progress.
I’ll paste my own estimate for this param in a different reply.
But here are the places I most differ from you:
Bigger adjustment for ‘smarter AI’. You’ve argued in your appendix that, only including ‘more efficient’ and ‘faster’ AI, you think the software-only singularity goes through. I think including ‘smarter’ AI makes a big difference. This evidence suggests that doubling training FLOP doubles output-per-FLOP 1-2 times. In addition, algorithmic improvements will improve runtime efficiency. So overall I think a doubling of algorithms yields ~two doublings of (parallel) cognitive labour.
--> software singularity more likely
Lower lambda. I’d now use more like lambda = 0.4 as my median. There’s really not much evidence pinning this down; I think Tamay Besiroglu thinks there’s some evidence for values as low as 0.2. This will decrease the observed historical increase in human workers more than it decreases the gains from algorithmic progress (bc of speed improvements)
--> software singularity slightly more likely
Complications thinking about compute which might be a wash.
Number of useful-experiments has increased by less than 4X/year. You say compute inputs have been increasing at 4X. But simultaneously the scale of experiments ppl must run to be near to the frontier has increased by a similar amount. So the number of near-frontier experiments has not increased at all.
This argument would be right if the ‘usefulness’ of an experiment depends solely on how much compute it uses compared to training a frontier model. I.e. experiment_usefulness = log(experiment_compute / frontier_model_training_compute). The 4X/year increases the numerator and denominator of the expression, so there’s no change in usefulness-weighted experiments.
That might be false. GPT-2-sized experiments might in some ways be equally useful even as frontier model size increases. Maybe a better expression would be experiment_usefulness = alpha * log(experiment_compute / frontier_model_training_compute) + beta * log(experiment_compute). In this case, the number of usefulness-weighted experiments has increased due to the second term. (See the toy sketch after this list.)
--> software singularity slightly more likely
Steeper diminishing returns during software singularity. Recent algorithmic progress has grabbed low-hanging fruit from new hardware scales. During a software-only singularity that won’t be possible. You’ll have to keep finding new improvements on the same hardware scale. Returns might diminish more quickly as a result.
--> software singularity slightly less likely
Compute share might increase as it becomes scarce. You estimate a share of 0.4 for compute, which seems reasonable. But it might fall over time as compute becomes a bottleneck. As an intuition pump, if your workers could think 1e10 times faster, you’d be fully constrained on the margin by the need for more compute: more labour wouldn’t help at all but more compute could be fully utilised so the compute share would be ~1.
--> software singularity slightly less likely
--> overall these compute adjustments prob make me more pessimistic about the software singularity, compared to your assumptions
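Here is the toy sketch referenced above, comparing the two usefulness expressions when both experiment compute and frontier training compute grow 4x/year (alpha, beta, and the starting scales are arbitrary placeholders, just to show the qualitative difference):

```python
import math

alpha, beta = 1.0, 1.0
exp_compute, frontier_compute = 1.0, 100.0  # arbitrary starting scales

for year in range(4):
    relative_only = math.log(exp_compute / frontier_compute)
    with_absolute = (alpha * math.log(exp_compute / frontier_compute)
                     + beta * math.log(exp_compute))
    print(f"year {year}: relative-only = {relative_only:.2f}, "
          f"with absolute term = {with_absolute:.2f}")
    exp_compute *= 4       # experiment compute grows 4x/year
    frontier_compute *= 4  # frontier training compute grows 4x/year
# The relative-only measure stays flat; the second form grows each year.
```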
Taking it all together, I think you should put more probability on the software-only singularity, mostly because of capability improvements being much more significant than you assume.
Yep, I think my estimates were too low based on these considerations and I’ve updated up accordingly. I updated down on your argument that maybe r decreases linearly as you approach optimal efficiency. (I think it probably doesn’t decrease linearly and instead drops faster towards the end based partially on thinking a bit about the dynamics and drawing on the example of what we’ve seen in semi-conductor improvement over time, but I’m not that confident.) Maybe I’m now at like 60% software-only is feasible given these arguments.
Lower lambda. I’d now use more like lambda = 0.4 as my median. There’s really not much evidence pinning this down; I think Tamay Besiroglu thinks there’s some evidence for values as low as 0.2.
Isn’t this really implausible? This implies that if you had 1000 researchers/engineers of average skill at OpenAI doing AI R&D, this would be as good as having one average-skill researcher running at 16x (1000^0.4) speed. It does seem very slightly plausible that having someone as good as the best researcher/engineer at OpenAI run at 16x speed would be competitive with OpenAI, but that isn’t what this term is computing. 0.2 is even more crazy, implying that 1000 researchers/engineers is as good as one researcher/engineer running at 4x speed!
I think 0.4 is far on the lower end (maybe 15th percentile) for all the way down to one accelerated researcher, but seems pretty plausible at the margin.
As in, 0.4 suggests that 1000 researchers = 100 researchers at 2.5x speed which seems kinda reasonable while 1000 researchers = 1 researcher at 16x speed does seem kinda crazy / implausible.
So, I think my current median lambda at likely margins is like 0.55 or something and 0.4 is also pretty plausible at the margin.
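For concreteness, here is what a few values of the parallelization penalty imply under the n^λ interpretation discussed above (0.2, 0.4, and 0.55 are the values from this exchange; 0.7 is the penalty used earlier in the thread):

```python
# n parallel researchers treated as equivalent to one researcher sped up by n**lam.
for lam in (0.2, 0.4, 0.55, 0.7):
    print(f"lambda = {lam}: 1000 researchers ~ one at {1000 ** lam:.0f}x speed, "
          f"or 100 at {10 ** lam:.1f}x speed each")
```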
Ok, I think what is going on here is maybe that the constant you’re discussing here is different from the constant I was discussing. I was trying to discuss the question of how much worse serial labor is than parallel labor, but I think the lambda you’re talking about takes into account compute bottlenecks and similar?
Taking it all together, i think you should put more probability on the software-only singluarity, mostly because of capability improvements being much more significant than you assume.
I’m confused — I thought you put significantly less probability on software-only singularity than Ryan does? (Like half?) Maybe you were using a different bound for the number of OOMs of improvement?
Sorry, for my comments on this post I’ve been referring to “software only singularity?” only as “will the parameter r > 1 when we first fully automate AI R&D”, not as a threshold for some number of OOMs. That’s what Ryan’s analysis seemed to be referring to.
I separately think that even if initially r>1 the software explosion might not go on for that long
I think Tom’s take is that he expects I will put more probability on software only singularity after updating on these considerations. It seems hard to isolate where Tom and I disagree based on this comment, but maybe it is on how much to weigh various considerations about compute being a key input.
Appendix: Estimating the relationship between algorithmic improvement and labor production
In particular, if we fix the architecture to use a token abstraction and consider training a new improved model: we care about how much cheaper you make generating tokens at a given level of performance (in inference tok/flop), how much serially faster you make generating tokens at a given level of performance (in serial speed: tok/s at a fixed level of tok/flop), and how much more performance you can get out of tokens (labor/tok, really per serial token). Then, for a given new model with reduced cost, increased speed, and increased production per token and assuming a parallelism penalty of 0.7, we can compute the increase in production as roughly: cost_reduction^0.7 ⋅ speed_increase^(1−0.7) ⋅ productivity_multiplier[1] (I can show the math for this if there is interest).
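As a rough illustration, here is a minimal Python sketch of this production formula (the example inputs are arbitrary, not estimates from the post):

```python
def labor_production_increase(cost_reduction: float,
                              speed_increase: float,
                              productivity_multiplier: float = 1.0,
                              parallelism_penalty: float = 0.7) -> float:
    """Rough increase in parallelism-adjusted labor production from a new model."""
    return (cost_reduction ** parallelism_penalty
            * speed_increase ** (1 - parallelism_penalty)
            * productivity_multiplier)

# Illustrative example: 2x cheaper tokens, 2^(1/3)x faster serial generation,
# and no change in productivity per token.
print(labor_production_increase(2.0, 2 ** (1 / 3)))  # ~1.74
```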
My sense is that reducing inference compute needed for a fixed level of capability that you already have (using a fixed amount of training compute) is usually somewhat easier than making frontier compute go further by some factor, though I don’t think it is easy to straightforwardly determine how much easier this is[2]. Let’s say there is a 1.25 exponent on reducing cost (as in, a 2x algorithmic efficiency improvement is as hard as a 2^1.25 = 2.38x reduction in cost). (I’m generally pretty confused about what the exponent should be; exponents from 0.5 to 2 seem plausible, where 0.5 would correspond to the square root from just scaling data in scaling laws.) It seems substantially harder to increase speed than to reduce cost, as speed is substantially constrained by serial depth, at least when naively applying transformers. Naively, reducing cost by β (which implies reducing parameters by β) will increase speed by somewhat more than β^(1/3), as parameter count scales roughly with the cube of depth. I expect you can do somewhat better than this because reduced matrix sizes also increase speed (it isn’t just depth) and because you can introduce speed-specific improvements (that just improve speed and not cost). But this factor might be pretty small, so let’s stick with an exponent of 1/3 for now and ignore speed-specific improvements. Now, let’s consider the case where we don’t have productivity multipliers (which is strictly more conservative). Then, we get that the increase in labor production is: cost_reduction^0.7 ⋅ speed_increase^0.3 = (algo_improvement^1.25)^0.7 ⋅ (algo_improvement^(1.25/3))^0.3 = algo_improvement^(0.875 + 0.125) = algo_improvement^1.
So, these numbers ended up yielding an exact equivalence between frontier algorithmic improvement and effective labor production increases. (This is a coincidence, though I do think the exponent is close to 1.)
In practice, we’ll be able to get slightly better returns by spending some of our resources investing in speed-specific improvements and in improving productivity rather than in reducing cost. I don’t currently have a principled way to estimate this (though I expect something roughly principled can be found by looking at trading off inference compute and training compute), but maybe I think this improves the returns to around algo_improvement^1.1. If the coefficient on reducing cost was much worse, we would invest more in improving productivity per token, which bounds the returns somewhat.
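To spell out the arithmetic behind these exponent estimates, here is a minimal Python sketch using the illustrative numbers from this appendix (a 1.25 exponent on cost, a 1/3 exponent on speed, and a 0.7 parallelism penalty; these are guesses, not measurements):

```python
from math import isclose

cost_exponent = 1.25       # 2x frontier algo improvement ~ 2^1.25x cost reduction
speed_exponent = 1 / 3     # speed increase ~ cost_reduction^(1/3)
parallelism_penalty = 0.7  # weight on cost (parallel copies) vs. speed (serial)

# Exponent converting frontier algorithmic improvement into labor production:
labor_exponent = cost_exponent * (parallelism_penalty
                                  + speed_exponent * (1 - parallelism_penalty))
print(labor_exponent)  # 1.25 * (0.7 + 0.1) = 1.0, the coincidental exact equivalence
assert isclose(labor_exponent, 1.0)
```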
Appendix: Isn’t compute tiny and decreasing per researcher?
One relevant objection is: Ok, but is this really feasible? Wouldn’t this imply that each AI researcher has only a tiny amount of compute? After all, if you use 20% of compute for inference of AI research labor, then each AI only gets 4x more compute to run experiments than for inference on itself? And, as you do algorithmic improvement to reduce AI cost and run more AIs, you also reduce the compute per AI! First, it is worth noting that as we do algorithmic progress, both the cost of AI researcher inference and the cost of experiments on models of a given level of capability go down. Precisely, for any experiment that involves a fixed number of inference or gradient steps on a model which is some fixed effective compute multiplier below/above the performance of our AI laborers, cost is proportional to inference cost (so, as we improve our AI workforce, experiment cost drops proportionally). However, for experiments that involve training a model from scratch, I expect the reduction in experiment cost to be relatively smaller such that such experiments must become increasingly small relative to frontier scale. Overall, it might be important to mostly depend on approaches which allow for experiments that don’t require training runs from scratch or to adapt to increasingly smaller full experiment training runs. To the extent AIs are made smarter rather than more numerous, this isn’t a concern. Additionally, we only need so many orders of magnitude of growth. In principle, this consideration should be captured by the exponents in the compute vs. labor production function, but it is possible this production function has very different characteristics in the extremes. Overall, I do think this concern is somewhat important, but I don’t think it is a dealbreaker for a substantial number of OOMs of growth.
Appendix: Can’t algorithmic efficiency only get so high?
My sense is that this isn’t very close to being a blocker. Here is a quick bullet point argument (from some slides I made) that takeover-capable AI is possible on current hardware.
Human brain is perhaps ~1e14 FLOP/s
With that efficiency, each H100 can run 10 humans (current cost $2 / hour)
10s of millions of human-level AIs with just current hardware production
Human brain is probably very suboptimal:
AIs already much better at many subtasks
Possible to do much more training than within lifetime training with parallelism
Biological issues: locality, noise, focused on sensory processing, memory limits
Smarter AI could be more efficient (smarter humans use less FLOP per task)
AI could be 1e2-1e7 more efficient on tasks like coding, engineering
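Putting rough numbers on the bullet points above (a sketch; the H100 throughput and production figures are my own ballpark assumptions, not numbers from the slides):

```python
human_brain_flops = 1e14      # ~1e14 FLOP/s, per the estimate above
h100_flops = 1e15             # roughly 1e15 dense BF16 FLOP/s per H100 (approximate)
humans_per_h100 = h100_flops / human_brain_flops
print(humans_per_h100)        # ~10 human-equivalents per H100
print(2.0 / humans_per_h100)  # ~$0.20 per human-equivalent hour at $2/hour per H100

h100_equivalents = 5e6        # assumed: a few million H100-equivalents of current hardware
print(humans_per_h100 * h100_equivalents)  # ~5e7, i.e. tens of millions of human-level AIs
```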
This is just approximate because you can also trade off speed with cost in complicated ways and research new ways to more efficiently trade off speed and cost. I’ll be ignoring this for now.
It’s hard to determine because inference cost reductions have been driven by spending more compute on making smaller models e.g., training a smaller model for longer rather than just being driven by algorithmic improvement, and I don’t have great numbers on the difference off the top of my head.
When considering an “efficiency only singularity”, some different estimates get him r~=1; r~=1.5; r~=1.6. (Where r is defined so that “for each x% increase in cumulative R&D inputs, the output metric will increase by r*x”. The condition for increasing returns is r>1.)
I said I was 50-50 on an efficiency only singularity happening, at least temporarily. Based on these additional considerations I’m now at more like ~85% on a software only singularity. And I’d guess that initially r = ~3 (though I still think values as low as 0.5 or as high as 6 are plausible). There seem to be many strong ~independent reasons to think capability improvements would be a really huge deal compared to pure efficiency improvements, and this is borne out by toy models of the dynamic.
Though note that later in the appendix he adjusts down from 85% to 65% due to some further considerations. Also, last I heard, Tom was more like 25% on software singularity. (ETA: Or maybe not? See other comments in this thread.)
Based on some guesses and some poll questions, my sense is that capabilities researchers would operate about 2.5x slower if they had 10x less compute (after adaptation)
Can you say roughly who the people surveyed were? (And if this was their raw guess or if you’ve modified it.)
I saw some polls from Daniel previously where I wasn’t sold that they were surveying people working on the most important capability improvements, so wondering if these are better.
Also, somewhat minor, but: I’m slightly concerned that surveys will overweight areas where labor is more useful relative to compute (because those areas should have disproportionately many humans working on them) and therefore be somewhat biased in the direction of labor being important.
I think your outline of an argument against contains an important error.
Scaling up hardware production has always been slower than scaling up algorithms, so this consideration is already factored into the historical trends. I don’t see a reason to believe that algorithms will start running away with the game.
Importantly, while the spending on hardware for individual AI companies has increased by roughly 3-4x each year[1], this has not been driven by scaling up hardware production by 3-4x per year. Instead, total compute production (in terms of spending, building more fabs, etc.) has been increasing by a much smaller factor each year, but a higher and higher fraction of that compute production has been used for AI. In particular, my understanding is that roughly ~20% of TSMC’s volume is now AI while it used to be much lower. So, the fact that scaling up hardware production is much slower than scaling up algorithms hasn’t bitten yet, and this isn’t factored into the historical trends.
Another way to put this is that the exact current regime can’t go on. If trends continue, then >100% of TSMC’s volume will be used for AI by 2027!
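A toy extrapolation of that trend claim (a sketch; the starting share, growth rates, and start year are rough assumptions, not measured figures):

```python
ai_share = 0.20          # assumed current AI fraction of TSMC volume (~15-20%)
ai_compute_growth = 3.5  # assumed ~3-4x/year growth in AI compute purchases
tsmc_growth = 1.1        # assumed much slower growth in total TSMC capacity
year = 2025              # assumed starting year

while ai_share <= 1.0:
    year += 1
    ai_share *= ai_compute_growth / tsmc_growth
    print(year, round(ai_share, 2))
# The implied AI share passes 100% of capacity within a couple of years,
# i.e. the exact current regime can't continue.
```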
This only counts as “massive compute automation” in my operationalization if building takeover-capable AI happens by scaling up TSMC to >1000% of what its potential FLOP output volume would otherwise have been. (And without such a large build-out, the economic impacts and the dependency on the hardware supply chain (at the critical points) could be relatively small.) So, massive compute automation requires something substantially off trend from TSMC’s perspective.
[Low importance] Based on my rough understanding, building takeover-capable AI without previously breaking an important trend is only possible prior to around 2030: by then, either the hardware spending trend must break or TSMC production must go substantially above trend. If takeover-capable AI is built prior to 2030, it could occur without substantial trend breaks, but this gets somewhat crazy towards the end of the timeline: hardware spending keeps increasing at ~3x per year for each actor (but there is some consolidation and acquisition of previously produced hardware yielding a one-time increase up to about 10x, which buys another ~2 years for this trend), algorithmic progress remains steady at ~3-4x per year, TSMC expands production somewhat faster than previously but not substantially above trend, and these suffice for getting sufficiently powerful AI. In this scenario, this wouldn’t count as massive compute automation.
The spending on training runs has increased by 4-5x per year according to Epoch, but part of this is making training runs go longer, which means the story for overall spending is more complex. We care about the overall spend on hardware, not just the spend on training runs.
Thanks, this is helpful. So it sounds like you expect:
1. AI progress which is slower than the historical trendline (though perhaps fast in absolute terms) because we’ll soon have finished eating through the hardware overhang;
2. separately, takeover-capable AI soon (i.e. before hardware manufacturers have had a chance to scale substantially).
It seems like all the action is taking place in (2). Even if (1) is wrong (i.e. even if we see substantially increased hardware production soon), that makes takeover-capable AI happen faster than expected; IIUC, this contradicts the OP, which seems to expect takeover-capable AI to happen later if it’s preceded by substantial hardware scaling.
In other words, it seems like in the OP you care about whether takeover-capable AI will be preceded by massive compute automation because:
[this point still holds up] this affects how legible it is that AI is a transformative technology
[it’s not clear to me this point holds up] takeover-capable AI being preceded by compute automation probably means longer timelines
The second point doesn’t clearly hold up because if we don’t see massive compute automation, this suggests that AI progress is slower than the historical trend.
I don’t think (2) is a crux (as discussed in person). I expect that if takeover-capable AI takes a while (e.g. it happens in 2040), then we will have a long winter where economic value from AI doesn’t increase that fast, followed by a period of faster progress around 2040. If progress is relatively stable across this entire period, then we’ll have enough time to scale up fabs. Even if progress isn’t stable, we could see enough total value from AI in the slower growth period to scale up fabs by 10x, but this would require >>$1 trillion of economic value per year I think (which IMO seems not that likely to come far before takeover-capable AI due to views about economic returns to AI and returns to scaling up compute).
The words “the feasibility of” importantly change the meaning of your claim in the first sentence? (I’m guessing it’s this based on the following parenthetical, but I’m having trouble parsing.)
I think this happening in practice is about 60% likely, so I don’t think feasibility vs. in practice is a huge delta.
My inside view sense is that the feasibility of takeover-capable AI without massive compute automation is about 75% likely if we get AIs that dominate top-human-experts prior to 2040.[7] Further, I think that in practice, takeover-capable AI without massive compute automation is maybe about 60% likely. (This is because massively increasing compute production is difficult and slow, so if proceeding without massive compute automation is feasible, this would likely occur.) However, I’m reasonably likely to change these numbers on reflection due to updating about what level of capabilities would suffice for being capable of takeover (in the sense defined in an earlier footnote) and about the level of revenue and investment needed to 10x compute production. I’m also uncertain whether a substantially smaller scale-up than 10x (e.g., 3x) would suffice to cause the effects noted earlier.
To-date software progress has looked like “improvements in pre-training algorithms, data quality, prompting strategies, tooling, scaffolding” as described here.
This takeover could occur autonomously, via assisting the developers in a power grab, or via partnering with a US adversary. I’ll count it as “takeover” if the resulting coalition has de facto control of most resources. I’ll count an AI as takeover-capable if it would have a >25% chance of succeeding at a takeover (with some reasonable coalition) if no other actors had access to powerful AI systems. Further, this takeover wouldn’t be preventable with plausible interventions on legible human controlled institutions, so e.g., it doesn’t include the case where an AI lab is steadily building more powerful AIs for an eventual takeover much later (see discussion here). This 25% probability is as assessed under my views but with the information available to the US government at the time this AI is created. This line is intended to point at when states should be very worried about AI systems undermining their sovereignty unless action has already been taken. Note that insufficient inference compute could prevent an AI from being takeover-capable even if it could take over with enough parallel copies. And note that whether a given level of AI capabilities suffices for being takeover-capable is dependent on uncertain facts about how vulnerable the world seems (from the subjective vantage point I defined earlier). Takeover via the mechanism of an AI escaping, independently building more powerful AI that it controls, and then this more powerful AI taking over would count as that original AI that escaped taking over. I would also count a rogue internal deployment that leads to the AI successfully backdooring or controlling future AI training runs such that those future AIs take over. However, I would not count merely sabotaging safety research.
I mean 10x additional production (caused by AI labor) above long running trends in expanding compute production and making it more efficient. As in, spending on compute production has been increasing each year and the efficiency of compute production (in terms of FLOP/$ or whatever) has also been increasing over time, and I’m talking about going 10x above this trend due to using AI labor to expand compute production (either revenue from AI labor or having AIs directly work on chips as I’ll discuss in a later footnote).
Note that I don’t count converting fabs from making other chips (e.g., phones) to making AI chips as scaling up compute production; I’m just considering things that scale up the amount of AI chips we could somewhat readily produce. TSMC’s revenue is “only” about $100 billion per year, so if only converting fabs is needed, this could be done without automation of compute production and justified on the basis of AI revenues that are substantially smaller than the revenues that would justify building many more fabs. Currently AI is around 15% of leading node production at TSMC, so only a few more doublings are needed for it to consume most capacity.
Note that the AI could indirectly increase compute production via being sufficiently economically useful that it generates enough money to pay for greatly scaling up compute. I would count this as massive compute automation, though some routes through which the AI could be sufficiently economically useful might be less convincing of transformativeness than the AIs substantially automating the process of scaling up compute production. However, I would not count the case where AI systems are impressive enough to investors that this justifies investment that suffices for greatly scaling up fab capacity while profits/revenues wouldn’t suffice for greatly scaling up compute on their own. In reality, if compute is greatly scaled up, this will occur via a mixture of speculative investment, the AI earning revenue, and the AI directly working on automating labor along the compute supply chain. If the revenue and direct automation would suffice for an at least massive compute scale-up (>10x) on their own (removing the component from speculative investment), then I would count this as massive compute automation.
A large algorithmic advance isn’t totally unprecedented. It could suffice if we see an advance similar to what seemingly happened with reasoning models like o1 and o3 in 2024.
About 2⁄3 of this is driven by software-only singularity.
I’m not sure if the definition of takeover-capable-AI (abbreviated as “TCAI” for the rest of this comment) in footnote 2 quite makes sense. I’m worried that too much of the action is in “if no other actors had access to powerful AI systems”, and not that much action is in the exact capabilities of the “TCAI”. In particular: Maybe we already have TCAI (by that definition) because if a frontier AI company or a US adversary was blessed with the assumption “no other actor will have access to powerful AI systems”, they’d have a huge advantage over the rest of the world (as soon as they develop more powerful AI), plausibly implying that it’d be right to forecast a >25% chance of them successfully taking over if they were motivated to try.
And this seems somewhat hard to disentangle from stuff that is supposed to count according to footnote 2, especially: “Takeover via the mechanism of an AI escaping, independently building more powerful AI that it controls, and then this more powerful AI taking over would” and “via assisting the developers in a power grab, or via partnering with a US adversary”. (Or maybe the scenario in the 1st paragraph is supposed to be excluded because current AI isn’t agentic enough to “assist”/“partner” with allies as opposed to just being used as a tool?)
What could a competing definition be? Thinking about what we care most about… I think two events especially stand out to me:
When would it plausibly be catastrophically bad for an adversary to steal an AI model?
When would it plausibly be catastrophically bad for an AI to be power-seeking and non-controlled?
Maybe a better definition would be to directly talk about these two events? So for example...
“Steal is catastrophic” would be true if...
(1a) “Frontier AI development projects immediately acquire good enough security to keep future model weights secure” has significantly less probability of AI-assisted takeover than
(1b) “Frontier AI development projects immediately have their weights stolen, and then acquire security that’s just as good as in (1a).”[1]
“Power-seeking and non-controlled is catastrophic” would be true if...
(2a) “Frontier AI development projects immediately acquire good enough judgment about power-seeking-risk that they henceforth choose to not deploy any model that would’ve been net-negative for them to deploy” has significantly less probability of AI-assisted takeover than
(2b) “Frontier AI development projects acquire the level of judgment described in (2a) 6 months later.”[2]
Where “significantly less probability of AI-assisted takeover” could be e.g. at least 2x less risk.
The motivation for assuming “future model weights secure” in both (1a) and (1b) is so that the downside of getting the model weights stolen imminently isn’t nullified by the fact that they’re very likely to get stolen a bit later regardless, because many interventions that would prevent model weight theft this month would also help prevent it in future months. (And also, we can’t contrast 1a’ = “model weights are permanently secure” with 1b’ = “model weights get stolen and are then default-level-secure”, because that would already have a really big effect on takeover risk, purely via the effect on future model weights, even though current model weights probably aren’t that important.)
The motivation for assuming “good future judgment about power-seeking-risk” is similar to the motivation for assuming “future model weights secure” above. The motivation for choosing “good judgment about when to deploy vs. not” rather than “good at aligning/controlling future models” is that a big threat model is “misaligned AIs outcompete us because we don’t have any competitive aligned AIs, so we’re stuck between deploying misaligned AIs and being outcompeted” and I don’t want to assume away that threat model.
I agree that the notion of takeover-capable AI I use is problematic and makes the situation hard to reason about, but I intentionally rejected the notions you propose as they seemed even worse to think about from my perspective.
Is there some reason for why current AI isn’t TCAI by your definition?
(I’d guess that the best way to rescue your notion is to stipulate that the TCAIs must have >25% probability of taking over themselves. Possibly with assistance from humans, possibly by manipulating other humans who think they’re being assisted by the AIs — but ultimately the original TCAIs should be holding the power in order for it to count. That would clearly exclude current systems. But I don’t think that’s how you meant it.)
Oh sorry. I somehow missed this aspect of your comment.
Here’s a definition of takeover-capable AI that I like: the AI is capable enough that plausible interventions on known human controlled institutions within a few months no longer suffice to prevent plausible takeover. (Which implies that making the situation clear to the world is substantially less useful and human controlled institutions can no longer as easily get a seat at the table.)
Under this definition, there are basically two relevant conditions:
The AI is capable enough to itself take over autonomously. (In the way you defined it, but also not in a way where intervening on human institutions can still prevent the takeover, so e.g., the AI just having a rogue deployment within OpenAI doesn’t suffice if substantial externally imposed improvements to OpenAI’s security and controls would defeat the takeover attempt.)
Or human groups can do a nearly immediate takeover with the AI such that they could then just resist such interventions.
I’ll clarify this in the comment.
Hm — what are the “plausible interventions” that would stop China from having >25% probability of takeover if no other country could build powerful AI? Seems like you either need to count a delay as successful prevention, or you need to have a pretty low bar for “plausible”, because it seems extremely difficult/costly to prevent China from developing powerful AI in the long run. (Where they can develop their own supply chains, put manufacturing and data centers underground, etc.)
Yeah, I’m trying to include delay as fine.
I’m just trying to point at “the point when aggressive intervention by a bunch of parties is potentially still too late”.
I really like the framing here, of asking whether we’ll see massive compute automation before [AI capability level we’re interested in]. I often hear people discuss nearby questions using IMO much more confusing abstractions, for example:
“How much is AI capabilities driven by algorithmic progress?” (problem: obscures dependence of algorithmic progress on compute for experimentation)
“How much AI progress can we get ‘purely from elicitation’?” (lots of problems, e.g. that eliciting a capability might first require a (possibly one-time) expenditure of compute for exploration)
Is this because:
You think that we’re >50% likely to not get AIs that dominate top human experts before 2040? (I’d be surprised if you thought this.)
The words “the feasibility of” importantly change the meaning of your claim in the first sentence? (I’m guessing it’s this based on the following parenthetical, but I’m having trouble parsing.)
Overall, it seems like you put substantially higher probability than I do on getting takeover capable AI without massive compute automation (and especially on getting a software-only singularity). I’d be very interested in understanding why. A brief outline of why this doesn’t seem that likely to me:
My read of the historical trend is that AI progress has come from scaling up all of the factors of production in tandem (hardware, algorithms, compute expenditure, etc.).
Scaling up hardware production has always been slower than scaling up algorithms, so this consideration is already factored into the historical trends. I don’t see a reason to believe that algorithms will start running away with the game.
Maybe you could counter-argue that algorithmic progress has only reflected returns to scale from AI being applied to AI research in the last 12-18 months, and that the data from this period is consistent with algorithms becoming more important relative to other factors?
I don’t see a reason that “takeover-capable” is a capability level at which algorithmic progress will be deviantly important relative to this historical trend.
I’d be interested either in hearing you respond to this sketch or in sketching out your reasoning from scratch.
I put roughly 50% probability on feasibility of software-only singularity.[1]
(I’m probably going to be reinventing a bunch of the compute-centric takeoff model in slightly different ways below, but I think it’s faster to partially reinvent than to dig up the material, and I probably do use a slightly different approach.)
My argument here will be a bit sloppy and might contain some errors. Sorry about this. I might be more careful in the future.
The key question for software-only singularity is: “When the rate of labor production is doubled (as in, as if your employees ran 2x faster[2]), does that more than double or less than double the rate of algorithmic progress? That is, algorithmic progress as measured by how fast we increase the labor production per FLOP/s (as in, the labor production from AI labor on a fixed compute base).”. This is a very economics-style way of analyzing the situation, and I think this is a pretty reasonable first guess. Here’s a diagram I’ve stolen from Tom’s presentation on explosive growth illustrating this:
Basically, every time you double the AI labor supply, does the time until the next doubling (driven by algorithmic progress) increase (fizzle) or decrease (foom)? I’m being a bit sloppy in saying “AI labor supply”. We care about a notion of parallelism-adjusted labor (faster laborers are better than more laborers) and quality increases can also matter. I’ll make the relevant notion more precise below.
I’m about to go into a relatively complicated argument for why I think the historical data supports software-only singularity. If you want more basic questions answered (such as “Doesn’t retraining make this too slow?”), consider looking at Tom’s presentation on takeoff speeds.
Here’s a diagram that you might find useful in understanding the inputs into AI progress:
And here is the relevant historical context in terms of trends:
Historically, algorithmic progress in LLMs looks like 3-4x per year including improvements on all parts of the stack.[3] This notion of algorithmic progress is “reduction in compute needed to reach a given level of frontier performance”, which isn’t equivalent to increases in the rate of labor production on a fixed compute base. I’ll talk more about this below.
This has been accompanied by increases of around 4x more hardware per year[4] and maybe 2x more quality-adjusted (parallel) labor working on LLM capabilities per year. I think total employees working on LLM capabilities have been roughly 3x-ing per year (in recent years), but quality has been decreasing over time.
A 2x increase in the quality-adjusted parallel labor force isn’t as good as the company getting the same sorts of labor tasks done 2x faster (as in, the resulting productivity from having your employees run 2x faster) due to the parallelism tax (putting aside compute bottlenecks for now). I’ll apply the same R&D parallelization penalty as used in Tom’s takeoff model and adjust this down by a power of 0.7 to yield 2^0.7 ≈ 1.6x per year in increased labor production rate. (So, it’s as though the company keeps the same employees, but those employees operate 1.6x faster each year.)
It looks like the fraction of progress driven by algorithmic progress has been getting larger over time.
So, overall, we’re getting 3-4x algorithmic improvement per year being driven by 1.6x more labor per year and 4x more hardware.
So, the key question is how much of this algorithmic improvement is being driven by labor vs. by hardware. If it is basically all hardware, then the returns to labor must be relatively weak and software-only singularity seems unlikely. If it is basically all labor, then we’re seeing 3-4x algorithmic improvement per year for 1.6x more labor per year, which means the returns to labor look quite good (at least historically). Based on some guesses and some poll questions, my sense is that capabilities researchers would operate about 2.5x slower if they had 10x less compute (after adaptation), so the production function is probably proportional to compute^0.4 ⋅ labor^0.6 (0.4 = log10(2.5)). (This is assuming a Cobb-Douglas production function.) Edit: see the derivation of the relevant thing in Deep’s comment, my old thing was wrong[5].
Now, let’s talk more about the transfer from algorithmic improvement to the rate of labor production. A 2x algorithmic improvement in LLMs makes it so that you can reach the same (frontier) level of performance for 2x less training compute, but we actually care about a somewhat different notion for software-only singularity: how much you can increase the production rate of labor (the thing that we said was increasing at roughly a rate of 1.6x per year by using more human employees). My current guess is that every 2x algorithmic improvement in LLMs increases the rate of labor production by 2^1.1, and I’m reasonably confident that the exponent isn’t much below 1.0. I don’t currently have a very principled estimation strategy for this, and it’s somewhat complex to reason about. I discuss this in the appendix below.
So, if this exponent is around 1, our central estimate of 2.3 from above corresponds to software-only singularity and our estimate of 1.56 from above under more pessimistic assumptions corresponds to this not being feasible. Overall, my sense is that the best guess numbers lean toward software-only singularity.
More precisely, software-only singularity that goes for >500x effective compute gains above trend (to the extent this metric makes sense, this is roughly >5 years of algorithmic progress). Note that you can have software-only singularity be feasible while buying tons more hardware at the same time. And if this ends up expanding compute production by >10x using AI labor, then this would count as massive compute production despite also having a feasible software-only singularity. (However, in most worlds, I expect software-only singularity to be fast enough, if feasible, that we don’t see this.)
Rather than denominating labor in accelerating employees, we could instead denominate in number of parallel employees. This would work equivalently (we can always convert into equivalents to the extent these things can funge), but because we can actually accelerate employees and the serial vs. parallel distinction is important, I think it is useful to denominate in accelerating employees.
I would have previously cited 3x, but recent progress looks substantially faster (with DeepSeek v3 and reasoning models seemingly indicating somewhat faster than 3x progress IMO), so I’ve revised to 3-4x.
This includes both increased spending and improved chips. Here, I’m taking my best guess at increases in hardware usage for training and transferring this to research compute usage on the assumption that training compute and research compute have historically been proportional.
Edit: the reasoning I did here was off. Here was the old text: so the production function is probably roughly α ⋅ compute^0.4 ⋅ labor^0.6 (0.4 = log10(2.5)). Increasing compute by 4x and labor by 1.6x increases algorithmic improvement by 3-4x, let’s say 3.5x, so we have 3.5 = α ⋅ 4^0.4 ⋅ 1.6^0.6, α = 3.5/(4^0.4 ⋅ 1.6^0.6) = 1.52. Thus, doubling labor would increase algorithmic improvement by 1.52 ⋅ 2^0.6 = 2.3. This is very sensitive to the exact numbers; if we instead used 3x slower instead of 2.5x slower, we would have gotten that doubling labor increases algorithmic improvement by 1.56, which is substantially lower. Obviously, all the exact numbers here are highly uncertain.
Hey Ryan! Thanks for writing this up—I think this whole topic is important and interesting.
I was confused about how your analysis related to the Epoch paper, so I spent a while with Claude analyzing it. I did a re-analysis that finds similar results, but also finds (I think) some flaws in your rough estimate. (Keep in mind I’m not an expert myself, and I haven’t closely read the Epoch paper, so I might well be making conceptual errors. I think the math is right though!)
I’ll walk through my understanding of this stuff first, then compare to your post. I’ll be going a little slowly (A) to make it easier to refresh my memory by referencing this later, (B) to make it easy to call out mistakes, and (C) to hopefully make this legible to others who want to follow along.
Using Ryan’s empirical estimates in the Epoch model
The Epoch model
The Epoch paper models growth with the following equation:
1. d(ln A)/dt ∼ A^(−β) E^λ,
where A = efficiency and E = research input. We want to consider worlds with a potential software takeoff, meaning that increases in AI efficiency directly feed into research input, which we model as d(ln A)/dt ∼ A^(−β) A^λ = A^(λ−β). So the key consideration seems to be the ratio λ/β. If it’s 1, we get steady exponential growth from scaling inputs; greater, superexponential; smaller, subexponential.[1]
Fitting the model
How can we learn about this ratio from historical data?
Let’s pretend history has been convenient and we’ve seen steady exponential growth in both variables, so A = A_0 e^(rt) and E = E_0 e^(qt). Then d(ln A)/dt has been constant over time, so by equation 1, A(t)^(−β) E(t)^λ has been constant as well. Substituting in for A and E, we find that A_0^(−β) e^(−βrt) E_0^λ e^(λqt) is constant over time, which is only possible if βr = λq and the exponent is always zero. Thus if we’ve seen steady exponential growth, the historical value of our key ratio is:
2. λ/β = r/q.
Intuitively, if we’ve seen steady exponential growth while research input has increased more slowly than research output (AI efficiency), there are superlinear returns to scaling inputs.
Introducing the Cobb-Douglas function
But wait! E, research input, is an abstraction that we can’t directly measure. Really there’s both compute and labor inputs. Those have indeed been growing roughly exponentially, but at different rates.
Intuitively, it makes sense to say that “effective research input” has grown as some kind of weighted average of the rate of compute and labor input growth. This is my take on why a Cobb-Douglas function of form (3) E ∼ C^p L^(1−p), with a weight parameter 0 < p < 1, is useful here: it’s a weighted geometric average of the two inputs, so its growth rate is a weighted average of their growth rates.
Writing that out: in general, say both inputs have grown exponentially, so C(t) = C_0 e^(q_c t) and L(t) = L_0 e^(q_l t). Then E has grown as E(t) = E_0 e^(qt) = E_0 e^((p q_c + (1−p) q_l) t), so q is the weighted average (4) q = p q_c + (1−p) q_l of the growth rates of compute and labor.
Then, using Equation 2, we can estimate our key ratio λ/β as r/q = r/(p q_c + (1−p) q_l).
Let’s get empirical!
Plugging in your estimates:
Historical compute scaling of 4x/year gives q_c = ln(4);
Historical labor scaling of 1.6x/year gives q_l = ln(1.6);
Historical compute elasticity on research outputs of 0.4 gives p = 0.4;
Taking the weighted average, q = 0.4 ⋅ ln(4) + 0.6 ⋅ ln(1.6) ≈ 0.84 ≈ ln(2.3).[2]
Historical efficiency improvement of 3.5x/year gives r = ln(3.5).
So λ/β = ln(3.5)/ln(2.3) ≈ 1.5.[3]
Adjusting for labor-only scaling
But wait: we’re not done yet! Under our Cobb-Douglas assumption, scaling labor by a factor of 2 isn’t as good as scaling all research inputs by a factor of 2; it’s only 2^0.6 / 2 as good.
Plugging in Equation 3 (which describes research input E in terms of compute and labor) to Equation 1 (which estimates AI progress A based on research), our adjusted form of the Epoch model is d(ln A)/dt ∼ A^(−β) E^λ ∼ A^(−β) ⋅ C^(pλ) ⋅ L^((1−p)λ).
Under a software-only singularity, we hold compute constant while scaling labor with AI efficiency, so d(ln A)/dt ∼ A(t)^(−β) ⋅ L(t)^((1−p)λ) multiplied by a fixed compute term. Since labor scales as A, we have d(ln A)/dt = A^(−β) ⋅ A^(λ(1−p)) = A^(λ(1−p)−β). By the same analysis as in our first section, we can see A grows exponentially if λ(1−p)/β = 1, and grows superexponentially if this ratio is >1. So our key ratio λ/β just gets multiplied by (1−p), and it wasn’t a waste to find it, phew!
Now we get the true form of our equation: we get a software-only foom iff λ/β ⋅ (1−p) > 1, or (via equation 2) iff we see empirically that r/q ⋅ (1−p) > 1. Call this the takeoff ratio: it corresponds to a) how much AI progress scales with inputs and b) how much of a penalty we take for not scaling compute.
Result: Above, we got λ/β = 1.5, so our takeoff ratio is 0.6 ⋅ 1.5 = 0.9. That’s quite close! If we think it’s more reasonable to use a historical growth rate of 4x instead of 3.5x, we’d increase our takeoff ratio by a factor of ln(4)/ln(3.5) ≈ 1.1, to a ratio of 0.99, right on the knife edge of FOOM.[4] [note: I previously had the wrong numbers here: I had lambda/beta = 1.6, which would mean the 4x/year case has a takeoff ratio of 1.05, putting it into FOOM land]
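For readers who want to check the arithmetic, here is a minimal Python sketch of this takeoff-ratio calculation, using the empirical guesses above (compute 4x/year, labor 1.6x/year, p = 0.4):

```python
from math import log

p = 0.4                              # compute elasticity on research output
q = p * log(4) + (1 - p) * log(1.6)  # growth rate of effective research input

for efficiency_growth in (3.5, 4.0):
    r = log(efficiency_growth)       # growth rate of software efficiency
    takeoff_ratio = (r / q) * (1 - p)
    print(efficiency_growth, round(takeoff_ratio, 2))  # 3.5 -> ~0.9, 4.0 -> ~0.99
```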
So this isn’t too far off from your results in terms of implications, but it is somewhat different (no FOOM for 3.5x, less sensitivity to the exact historical growth rate).
Analyzing your approach:
Tweaking alpha:
Your estimate of α is in fact similar in form to my ratio r/q, but what you’re calculating instead is α = e^r / e^q = 3.5/(4^0.4 ⋅ 1.6^0.6).
One indicator that something’s wrong is that your result involves checking whether α ⋅ 2^(1−p) > 2, or equivalently whether ln(α) + (1−p) ⋅ ln(2) > ln(2), or equivalently whether ln(α) > p ⋅ ln(2). But the choice of 2 is arbitrary: conceptually, you just want to check if scaling software by a factor n increases outputs by a factor n or more. Yet ln(α) − p ⋅ ln(n) clearly varies with n.
One way of parsing the problem is that alpha is (implicitly) time dependent—it is equal to exp(r * 1 year) / exp(q * 1 year), a ratio of progress vs inputs in the time period of a year. If you calculated alpha based on a different amount of time, you’d get a different value. By contrast, r/q is a ratio of rates, so it stays the same regardless of what timeframe you use to measure it.[5]
Maybe I’m confused about what your Cobb-Douglas function is meant to be calculating—is it E within an Epoch-style takeoff model, or something else?
Nuances:
Does Cobb-Douglas make sense?
The geometric average of rates thing makes sense, but it feels weird that that simple intuitive approach leads to a functional form (Cobb-Douglas) that also has other implications.
Wikipedia says Cobb-Douglas functions can have the exponents not add to 1 (while both being between 0 and 1). Maybe this makes sense here? Not an expert.
How seriously should we take all this?
This whole thing relies on...
Assuming smooth historical trends
Assuming those trends continue in the future
And those trends themselves are based on functional fits to rough / unclear data.
It feels like this sort of thing is better than nothing, but I wish we had something better.
I really like the various nuances you’re adjusting for, like parallel vs serial scaling, and especially distinguishing algorithmic improvement from labor efficiency. [6] Thinking those things through makes this stuff feel less insubstantial and approximate...though the error bars still feel quite large.
Actually there’s a complexity here, which is that scaling labor alone may be less efficient than scaling “research inputs” which include both labor and compute. We’ll come to this in a few paragraphs.
This is only coincidentally similar to your figure of 2.3 :)
I originally had 1.6 here, but as Ryan points out in a reply it’s actually 1.5. I’ve tried to reconstruct what I could have put into a calculator to get 1.6 instead, and I’m at a loss!
I was curious how aggressive the superexponential growth curve would be with a takeoff ratio of a mere 0.96 ⋅ 1.1 = 1.056. A couple of Claude queries gave me different answers (maybe because the growth is so extreme that different solvers give meaningfully different approximations?), but they agreed that growth is fairly slow in the first year (~5x) and then hits infinity by the end of the second year. I wrote this comment with the wrong numbers (0.96 instead of 0.9), so it doesn’t accurately represent what you get if you plug in 4x capability growth per year. Still cool to get a sense of what these curves look like, though. I think this can be understood in terms of the alpha-being-implicitly-a-timescale-function thing: if you compare an alpha value with the ratio of growth you’re likely to see during the same time period, e.g. alpha(1 year) and n = one doubling, you probably get reasonable-looking results.
I find it annoying that people conflate “increased efficiency of doing known tasks” with “increased ability to do new useful tasks”. It seems to me that these could be importantly different, although it’s hard to even settle on a reasonable formalization of the latter. Some reasons this might be okay:
There’s a fuzzy conceptual boundary between the two: if GPT-n can do the task at 0.01% success rate, does that count as a “known task?” what about if it can do each of 10 components at 0.01% success, so in practice we’ll never see it succeed if run without human guidance, but we know it’s technically possible?
Under a software singularity situation, maybe the working hypothesis is that the model can do everything necessary to improve itself a bunch, maybe just not very efficiently yet. So we only need efficiency growth, not to increase the task set. That seems like a stronger assumption than most make, but maybe a reasonable weaker assumption is that the model will ‘unlock’ the necessary new tasks over time, after which point they become subject to rapid efficiency growth.
And empirically, we have in fact seen rapid unlocking of new capabilities, so it’s not crazy to approximate “being able to do new things” as a minor but manageable slowdown to the process of AI replacing human AI R&D labor.
I think you are correct with respect to my estimate of α and the associated model I was using. Sorry about my error here. I think I was fundamentally confusing a few things in my head when writing out the comment.
I think your refactoring of my strategy is correct and I tried to check it myself, though I don’t feel confident in verifying it is correct.
Your estimate doesn’t account for the conversion between algorithmic improvement and labor efficiency, but it is easy to add this in by just changing the historical algorithmic efficiency improvement of 3.5x/year to instead be the adjusted effective labor efficiency rate and then solving identically. I was previously thinking the relationship was that labor efficiency was around the same as algorithmic efficiency, but I now think this is more likely to be around algo_efficiency^2 based on Tom’s comment.
Plugging this in, we’d get:
λ/β ⋅ (1−p) = r/q ⋅ (1−p) = [ln(3.5^2) / (0.4 ⋅ ln(4) + 0.6 ⋅ ln(1.6))] ⋅ (1−0.4) = [2 ⋅ ln(3.5)/ln(2.3)] ⋅ 0.6 = 2 ⋅ 1.5 ⋅ 0.6 = 1.8
(In your comment you said ln(3.5)/ln(2.3) = 1.6, but I think the arithmetic is a bit off here and the answer is closer to 1.5.)
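Here is the same calculation as a minimal Python sketch, with the assumed squared relationship between algorithmic efficiency and labor efficiency folded in:

```python
from math import log

p = 0.4
q = p * log(4) + (1 - p) * log(1.6)  # ~ ln(2.3)
r = log(3.5 ** 2)                    # labor efficiency assumed ~ algo_efficiency^2
print(round(r / q, 2), round((r / q) * (1 - p), 2))  # ~3.0 and ~1.8
```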
Neat, thanks a ton for the algorithmic-vs-labor update—I appreciated that you’d distinguished those in your post, but I forgot to carry that through in mine! :)
And oops, I really don’t know how I got to 1.6 instead of 1.5 there. Thanks for the flag, have updated my comment accordingly!
The square relationship idea is interesting—that factor of 2 is a huge deal. Would be neat to see a Guesstimate or Squiggle version of this calculation that tries to account for the various nuances Tom mentions, and has error bars on each of the terms, so we both get a distribution of r and a sensitivity analysis. (Maybe @Tom Davidson already has this somewhere? If not I might try to make a crappy version myself, or poke talented folks I know to do a good version :)
The existing epoch paper is pretty good, but doesn’t directly target LLMs in a way which seems somewhat sad.
The thing I’d be most excited about is:
Epoch does an in depth investigation using an estimation methodology which is directly targeting LLMs (rather than looking at returns in some other domains).
They use public data and solicit data from companies about algorithmic improvement, head count, compute on experiments etc.
(Some) companies provide this data. Epoch potentially doesn’t publish this exact data and instead just publishes the results of the final analysis to reduce capabilities externalities. (IMO, companies are somewhat unlikely to do this, but I’d like to be proven wrong!)
(I’m going through this and understanding where I made an error with my approach to α. I think I did make an error, but I’m trying to make sure I’m not still confused. Edit: I’ve figured this out, see my other comment.)
It shouldn’t matter in this case because we’re raising the whole value of E to λ.
Here’s my own estimate for this parameter:
Once AI has automated AI R&D, will software progress become faster or slower over time? This depends on the extent to which software improvements get harder to find as software improves – the steepness of the diminishing returns.
We can ask the following crucial empirical question:
When (cumulative) cognitive research inputs double, how many times does software double?
(In growth models of a software intelligence explosion, the answer to this empirical question is a parameter called r.)
If the answer is “< 1”, then software progress will slow down over time. If the answer is “1”, software progress will remain at the same exponential rate. If the answer is “>1”, software progress will speed up over time.
The bolded question can be studied empirically, by looking at how many times software has doubled each time the human researcher population has doubled.
(What does it mean for “software” to double? A simple way of thinking about this is that software doubles when you can run twice as many copies of your AI with the same compute. But software improvements don’t just improve runtime efficiency: they also improve capabilities. To incorporate these improvements, we’ll ultimately need to make some speculative assumptions about how to translate capability improvements into an equivalently-useful runtime efficiency improvement.)
The best quality data on this question is Epoch’s analysis of computer vision training efficiency. They estimate r = ~1.4: every time the researcher population doubled, training efficiency doubled 1.4 times. (Epoch’s preliminary analysis indicates that the r value for LLMs would likely be somewhat higher.) We can use this as a starting point, and then make various adjustments:
Upwards for improving capabilities. Improving training efficiency improves capabilities, as you can train a model with more “effective compute”. To quantify this effect, imagine we use a 2X training efficiency gain to train a model with twice as much “effective compute”. How many times would that double “software”? (I.e., how many doublings of runtime efficiency would have the same effect?) There are various sources of evidence on how much capabilities improve every time training efficiency doubles: toy ML experiments suggest the answer is ~1.7; human productivity studies suggest the answer is ~2.5. I put more weight on the former, so I’ll estimate 2. This doubles my median estimate to r = ~2.8 (= 1.4 * 2).
Upwards for post-training enhancements. So far, we’ve only considered pre-training improvements. But post-training enhancements like fine-tuning, scaffolding, and prompting also improve capabilities (o1 was developed using such techniques!). It’s hard to say how large an increase we’ll get from post-training enhancements. These can allow faster thinking, which could be a big factor. But there might also be strong diminishing returns to post-training enhancements holding base models fixed. I’ll estimate a 1-2X increase, and adjust my median estimate to r = ~4 (2.8*1.45=4).
Downwards for less growth in compute for experiments. Today, rising compute means we can run increasing numbers of GPT-3-sized experiments each year. This helps drive software progress. But compute won’t be growing in our scenario. That might mean that returns to additional cognitive labour diminish more steeply. On the other hand, the most important experiments are ones that use similar amounts of compute to training a SOTA model. Rising compute hasn’t actually increased the number of these experiments we can run, as rising compute increases the training compute for SOTA models. And in any case, this doesn’t affect post-training enhancements. But this still reduces my median estimate down to r = ~3. (See Eth (forthcoming) for more discussion.)
Downwards for fixed scale of hardware. In recent years, the scale of hardware available to researchers has increased massively. Researchers could invent new algorithms that only work at the new hardware scales for which no one had previously tried to develop algorithms. Researchers may have been plucking low-hanging fruit for each new scale of hardware. But in the software intelligence explosions I’m considering, this won’t be possible because the hardware scale will be fixed. OpenAI estimates ImageNet efficiency via a method that accounts for this (by focussing on a fixed capability level), and finds a 16-month doubling time, as compared with Epoch’s 9-month doubling time. This reduces my estimate down to r = ~1.7 (3 * 9⁄16).
Downwards for diminishing returns becoming steeper over time. In most fields, returns diminish more steeply than in software R&D. So perhaps software will tend to become more like the average field over time. To estimate the size of this effect, we can take our estimate that software is ~10 OOMs from physical limits (discussed below), and assume that for each OOM increase in software, r falls by a constant amount, reaching zero once physical limits are reached. If r = 1.7, then this implies that r falls by 0.17 for each OOM. Epoch estimates that pre-training algorithmic improvements are growing by an OOM every ~2 years, which would imply a reduction in r of 1.02 (6*0.17) by 2030. But when we include post-training enhancements, the decrease will be smaller (as [reason]), perhaps ~0.5. This reduces my median estimate to r = ~1.2 (1.7 - 0.5).
Overall, my median estimate of r is 1.2. I use a log-uniform distribution with the bounds 3X higher and lower (0.4 to 3.6).
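To make this chain of adjustments easy to check, here is a minimal sketch in Python. The factors are just the estimates quoted above, and the final line simply reports what the stated log-uniform bounds imply about the mass above r = 1:

```python
import numpy as np

# Minimal sketch of the chain of adjustments described above.
r = 1.4          # starting median estimate
r *= 2           # upwards: capability gains from extra effective training compute
r *= 1.45        # upwards: post-training enhancements (the "1-2X" adjustment)
r = 3.0          # downwards: less growth in compute for experiments (set directly to the ~3 quoted above)
r *= 9 / 16      # downwards: fixed hardware scale (16-month vs. 9-month doubling time)
r -= 0.5         # downwards: diminishing returns becoming steeper over time
print(f"median r ~ {r:.2f}")  # ~1.2

# Log-uniform uncertainty with bounds 3x above and below the median (~0.4 to 3.6).
rng = np.random.default_rng(0)
samples = np.exp(rng.uniform(np.log(r / 3), np.log(r * 3), size=100_000))
print(f"P(r > 1) ~ {(samples > 1).mean():.2f}")
```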
My sense is that I start with a higher r value due to the LLM case looking faster (and not feeling the need to adjust downward in a few places like you do in the LLM case). Obviously the numbers in the LLM case are much less certain given that I’m guessing based on qualitative improvement and looking at some open source models, but being closer to what we actually care about maybe overwhelms this.
I also think I’d get a slightly lower update on the diminishing returns case due to thinking it has a good chance of having substantially sharper diminishing returns as you get closer and closer rather than having linearly decreasing r (based on some first-principles reasoning and my understanding of how returns diminished in the semi-conductor case).
But the biggest delta is that I think I wasn’t pricing in the importance of increasing capabilities. (Which seems especially important if you apply a large R&D parallelization penalty.)
Sorry, I don’t follow why they’re less certain?
I’d be interested to hear more about this. The semi-conductor case is hard because we don’t know how far we are from limits, but if we use Landauer’s limit then I’d guess you’re right. There’s also uncertainty about how much algorithmic progress we have made and will make.
I’m just eyeballing the rate of algorithmic progress while in the computer vision case, we can at least look at benchmarks and know the cost of training compute for various models.
My sense is that you have generalization issues in the computer vision case, while in the frontier LLM case you have issues with knowing the actual numbers (in terms of number of employees and cost of training runs). I’m also just not carefully doing the accounting.
I don’t have much to say here sadly, but I do think investigating this could be useful.
Really appreciate you covering all these nuances, thanks Tom!
Can you give a pointer to the studies you mentioned here?
Sure! See here: https://docs.google.com/document/d/1DZy1qgSal2xwDRR0wOPBroYE_RDV1_2vvhwVz4dxCVc/edit?tab=t.0#bookmark=id.eqgufka8idwl
Here’s a simple argument I’d be keen to get your thoughts on:
On the Possibility of a Tastularity
Research taste is the collection of skills including experiment ideation, literature review, experiment analysis, etc. that collectively determine how much you learn per experiment on average (perhaps alongside another factor accounting for inherent problem difficulty / domain difficulty, of course, and diminishing returns)
Human researchers seem to vary quite a bit in research taste—specifically, the difference between 90th percentile professional human researchers and the very best seems like maybe an order of magnitude? Depends on the field, etc. And the tails are heavy; there is no sign of the distribution bumping up against any limits.
Yet the causes of these differences are minor! Take the very best human researchers compared to the 90th percentile. They’ll have almost the same brain size, almost the same amount of experience, almost the same genes, etc. in the grand scale of things.
This means we should assume that if the human population were massively bigger, e.g. trillions of times bigger, there would be humans whose brains don’t look that different from the brains of the best researchers on Earth, and yet who are an OOM or more above the best Earthly scientists in research taste. -- AND it suggests that in the space of possible mind-designs, there should be minds which are e.g. within 3 OOMs of those brains in every dimension of interest, and which are significantly better still in the dimension of research taste. (How much better? Really hard to say. But it would be surprising if it was only, say, 1 OOM better, because that would imply that human brains are running up against the inherent limits of research taste within a 3-OOM mind design space, despite human evolution having only explored a tiny subspace of that space, and despite the human distribution showing no signs of bumping up against any inherent limits)
OK, so what? So, it seems like there’s plenty of room to improve research taste beyond human level. And research taste translates pretty directly into overall R&D speed, because it’s about how much experimentation you need to do to achieve a given amount of progress. With enough research taste, you don’t need to do experiments at all—or rather, you look at the experiments that have already been done, and you infer from them all you need to know to build the next design or whatever.
Anyhow, tying this back to your framework: What if the diminishing returns / increasing problem difficulty / etc. dynamics are such that, if you start from a top-human-expert-level automated researcher, and then do additional AI research to double its research taste, and then do additional AI research to double its research taste again, etc. the second doubling happens in less time than it took to get to the first doubling? Then you get a singularity in research taste (until these conditions change of course) -- the Tastularity.
How likely is the Tastularity? Well, again one piece of evidence here is the absurdly tiny differences between humans that translate to huge differences in research taste, and the heavy-tailed distribution. This suggests that we are far from any inherent limits on research taste even for brains roughly the shape and size and architecture of humans, and presumably the limits for a more relaxed (e.g. 3 OOM radius in dimensions like size, experience, architecture) space in mind-design are even farther away. It similarly suggests that there should be lots of hill-climbing that can be done to iteratively improve research taste.
How does this relate to software-singularity? Well, research taste is just one component of algorithmic progress; there is also speed, # of parallel copies & how well they coordinate, and maybe various other skills besides such as coding ability. So even if the Tastularity isn’t possible, improvements in taste will stack with improvements in those other areas, and the sum might cross the critical threshold.
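One way to make the doubling-time condition above concrete (this is my own toy formalization, not something from the comment): if progress in research taste per unit time scales as taste^p, then p > 1 means each successive doubling of taste takes less time than the last (finite-time blow-up, i.e. a Tastularity), while p < 1 means doublings keep slowing down. A small numerical sketch:

```python
# Toy model: d(taste)/dt = taste**p. If p > 1, successive doublings accelerate.
def time_to_double(taste: float, p: float, dt: float = 1e-4) -> float:
    """Numerically integrate d(taste)/dt = taste**p until taste doubles."""
    target, t = 2 * taste, 0.0
    while taste < target:
        taste += taste**p * dt
        t += dt
    return t

for p in (0.5, 1.0, 1.5):
    d1 = time_to_double(1.0, p)  # time for the first doubling
    d2 = time_to_double(2.0, p)  # time for the second doubling
    print(f"p={p}: first doubling {d1:.3f}, second doubling {d2:.3f}, "
          f"{'accelerating' if d2 < d1 else 'not accelerating'}")
```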
In my framework, this is basically an argument that algorithmic-improvement-juice can be translated into a large improvement in AI R&D labor production via the mechanism of greatly increasing the productivity per “token” (or unit of thinking compute or whatever). See my breakdown here where I try to convert from historical algorithmic improvement to making AIs better at producing AI R&D research.
Your argument is basically that this taste mechanism might have higher returns than reducing cost to run more copies.
I agree this sort of argument means that returns to algorithmic improvement on AI R&D labor production might be bigger than you would otherwise think. This is both because this mechanism might be more promising than other mechanisms and because, even if it is somewhat less promising, diverse approaches make returns diminish less aggressively. (In my model, this means that the best-guess conversion might be more like algo_improvement^1.3 rather than algo_improvement^1.0.)
I think it might be somewhat tricky to train AIs to have very good research taste, but doing so via training on various prediction objectives doesn’t seem that hard.
At a more basic level, I expect that training AIs to predict the results of experiments, and then running experiments based on value of information as estimated partially from these predictions (skipping experiments whose results are already close to certain, and more generally using the predictions to figure out what to do), seems pretty promising. It’s really hard to train humans to predict the results of tens of thousands of experiments (both small and large), but this is relatively clean outcomes-based feedback for AIs.
I don’t really have a strong inside view on how much the “AI R&D research taste” mechanism increases the returns to algorithmic progress.
I’ll paste my own estimate for this param in a different reply.
But here are the places I most differ from you:
Bigger adjustment for ‘smarter AI’. You’ve argued in your appendix that, only including ‘more efficient’ and ‘faster’ AI, you think the software-only singularity goes through. I think including ‘smarter’ AI makes a big difference. This evidence suggests that doubling training FLOP doubles output-per-FLOP 1-2 times. In addition, algorithmic improvements will improve runtime efficiency. So overall I think a doubling of algorithms yields ~two doublings of (parallel) cognitive labour.
--> software singularity more likely
Lower lambda. I’d now use more like lambda = 0.4 as my median. There’s really not much evidence pinning this down; I think Tamay Besiroglu thinks there’s some evidence for values as low as 0.2. This decreases the effective contribution of the observed historical increase in human workers more than it decreases the gains from algorithmic progress (because of speed improvements).
--> software singularity slightly more likely
Complications in thinking about compute, which might be a wash.
Number of useful experiments has increased by less than 4X/year. You say compute inputs have been increasing at 4X. But simultaneously the scale of experiments people must run to be near the frontier has increased by a similar amount. So the number of near-frontier experiments has not increased at all.
This argument would be right if the ‘usefulness’ of an experiment depended solely on how much compute it uses compared to training a frontier model, i.e. experiment_usefulness = log(experiment_compute / frontier_model_training_compute). The 4X/year increases the numerator and denominator of the expression, so there’s no change in usefulness-weighted experiments.
That might be false. GPT-2-sized experiments might in some ways be equally useful even as frontier model size increases. Maybe a better expression would be experiment_usefulness = alpha * log(experiment_compute / frontier_model_training_compute) + beta * log(experiment_compute). In this case, the number of usefulness-weighted experiments has increased due to the second term. (A small numerical sketch comparing these two expressions is included below, after these points.)
--> software singularity slightly more likely
Steeper diminishing returns during software singularity. Recent algorithmic progress has grabbed low-hanging fruit from new hardware scales. During a software-only singularity that won’t be possible. You’ll have to keep finding new improvements on the same hardware scale. Returns might diminish more quickly as a result.
--> software singularity slightly less likely
Compute share might increase as it becomes scarce. You estimate a share of 0.4 for compute, which seems reasonable. But it might fall over time as compute becomes a bottleneck. As an intuition pump, if your workers could think 1e10 times faster, you’d be fully constrained on the margin by the need for more compute: more labour wouldn’t help at all but more compute could be fully utilised so the compute share would be ~1.
--> software singularity slightly less likely
--> overall these compute adjustments prob make me more pessimistic about the software singularity, compared to your assumptions
Taking it all together, I think you should put more probability on the software-only singularity, mostly because of capability improvements being much more significant than you assume.
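As flagged above, here is a minimal numerical sketch (my own illustration; the weights alpha and beta are hypothetical) of the two experiment-usefulness expressions when both experiment compute and frontier training compute grow 4x/year:

```python
import math

ALPHA, BETA = 1.0, 0.3  # hypothetical weights; the point is the trend, not these values

def usefulness(experiment_compute, frontier_compute, alpha=ALPHA, beta=BETA):
    relative_term = alpha * math.log(experiment_compute / frontier_compute)
    absolute_term = beta * math.log(experiment_compute)
    return relative_term, relative_term + absolute_term

exp_c, frontier_c = 1.0, 100.0
for year in range(4):
    rel_only, with_absolute = usefulness(exp_c, frontier_c)
    print(f"year {year}: relative-only usefulness {rel_only:.2f}, "
          f"with absolute term {with_absolute:.2f}")
    exp_c *= 4       # experiment compute grows 4x/year...
    frontier_c *= 4  # ...but so does frontier training compute
# Only the trend matters here: the relative-only measure is flat across years, while
# the version with an absolute log(experiment_compute) term rises, which is the
# second-term effect described above.
```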
Yep, I think my estimates were too low based on these considerations and I’ve updated up accordingly. I updated down on your argument that maybe r decreases linearly as you approach optimal efficiency. (I think it probably doesn’t decrease linearly and instead drops faster towards the end based partially on thinking a bit about the dynamics and drawing on the example of what we’ve seen in semi-conductor improvement over time, but I’m not that confident.) Maybe I’m now at like 60% software-only is feasible given these arguments.
Isn’t this really implausible? This implies that if you had 1000 researchers/engineers of average skill at OpenAI doing AI R&D, this would be as good as having one average skill researcher running at 16x (1000^0.4) speed. It does seem very slightly plausible that having someone as good as the best researcher/engineer at OpenAI run at 16x speed would be competitive with OpenAI, but that isn’t what this term is computing. 0.2 is even more crazy, implying that 1000 researchers/engineers is as good as one researcher/engineer running at 4x speed!
I think 0.4 is far on the lower end (maybe 15th percentile) for all the way down to one accelerated researcher, but seems pretty plausible at the margin.
As in, 0.4 suggests that 1000 researchers = 100 researchers at 2.5x speed which seems kinda reasonable while 1000 researchers = 1 researcher at 16x speed does seem kinda crazy / implausible.
So, I think my current median lambda at likely margins is like 0.55 or something and 0.4 is also pretty plausible at the margin.
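For reference, here is the arithmetic behind the 16x / 4x / 2.5x figures in this exchange, assuming output scales as (number of parallel researchers)^lambda (a minimal sketch; the lambda values are just the ones discussed above):

```python
# With output proportional to N**lam, N parallel researchers are equivalent to one
# researcher sped up by N**lam, or to M researchers each sped up by (N / M)**lam.
for lam in (0.2, 0.4, 0.55):
    n = 1000
    one_equiv = n**lam              # 1000 researchers vs. 1 accelerated researcher
    hundred_equiv = (n / 100)**lam  # 1000 researchers vs. 100 accelerated researchers
    print(f"lambda={lam}: 1000 researchers ~ 1 researcher at {one_equiv:.1f}x speed, "
          f"or 100 researchers at {hundred_equiv:.1f}x speed")
```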
Ok, I think what is going on here is maybe that the constant you’re discussing here is different from the constant I was discussing. I was trying to discuss the question of how much worse serial labor is than parallel labor, but I think the lambda you’re talking about takes into account compute bottlenecks and similar?
Not totally sure.
I’m confused — I thought you put significantly less probability on software-only singularity than Ryan does? (Like half?) Maybe you were using a different bound for the number of OOMs of improvement?
Sorry, for my comments on this post I’ve been referring to “software only singularity?” only as “will the parameter r > 1 when we first fully automate AI R&D”, not as a threshold for some number of OOMs. That’s what Ryan’s analysis seemed to be referring to.
I separately think that even if initially r > 1, the software explosion might not go on for that long.
I’ll post about my views on different numbers of OOMs soon
I think Tom’s take is that he expects I will put more probability on software only singularity after updating on these considerations. It seems hard to isolate where Tom and I disagree based on this comment, but maybe it is on how much to weigh various considerations about compute being a key input.
Appendix: Estimating the relationship between algorithmic improvement and labor production
In particular, if we fix the architecture to use a token abstraction and consider training a new improved model: we care about how much cheaper you make generating tokens at a given level of performance (in inference tok/flop), how much serially faster you make generating tokens at a given level of performance (in serial speed: tok/s at a fixed level of tok/flop), and how much more performance you can get out of tokens (labor/tok, really per serial token). Then, for a given new model with reduced cost, increased speed, and increased production per token and assuming a parallelism penalty of 0.7, we can compute the increase in production as roughly: cost_reduction^0.7 ⋅ speed_increase^(1−0.7) ⋅ productivity_multiplier[1] (I can show the math for this if there is interest).
My sense is that reducing inference compute needed for a fixed level of capability that you already have (using a fixed amount of training compute) is usually somewhat easier than making frontier compute go further by some factor, though I don’t think it is easy to straightforwardly determine how much easier this is[2]. Let’s say there is a 1.25 exponent on reducing cost (as in, a 2x algorithmic efficiency improvement is as hard as a 2^1.25 ≈ 2.38x reduction in cost)? (I’m generally pretty confused about what the exponent should be; exponents from 0.5 to 2 seem plausible. 0.5 would correspond to the square root from just scaling data in scaling laws.) It seems substantially harder to increase speed than to reduce cost, as speed is substantially constrained by serial depth, at least when naively applying transformers. Naively, reducing cost by β (which implies reducing parameters by β) will increase speed by somewhat more than β^(1/3), as parameter count scales roughly cubically with depth (at a fixed aspect ratio). I expect you can do somewhat better than this because reduced matrix sizes also increase speed (it isn’t just depth) and because you can introduce speed-specific improvements (that just improve speed and not cost). But this factor might be pretty small, so let’s stick with 1/3 for now and ignore speed-specific improvements. Now, let’s consider the case where we don’t have productivity multipliers (which is strictly more conservative). Then, we get that the increase in labor production is:
cost_reduction^0.7 ⋅ cost_reduction^((1/3)⋅(1−0.7)) = cost_reduction^0.8 = algo_improvement^(1.25⋅0.8) = algo_improvement^1
So, these numbers ended up yielding an exact equivalence between frontier algorithmic improvement and effective labor production increases. (This is a coincidence, though I do think the exponent is close to 1.)
In practice, we’ll be able to get slightly better returns by spending some of our resources investing in speed-specific improvements and in improving productivity rather than in reducing cost. I don’t currently have a principled way to estimate this (though I expect something roughly principled can be found by looking at trading off inference compute and training compute), but maybe I think this improves the returns to around algo_improvement^1.1. If the coefficient on reducing cost was much worse, we would invest more in improving productivity per token, which bounds the returns somewhat.
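Here is a small sketch of this conversion as code, under the assumptions stated above (a 0.7 parallelism penalty, a 1.25 cost exponent, speed scaling as cost_reduction^(1/3), and no productivity multiplier):

```python
PARALLEL_PENALTY = 0.7
COST_EXPONENT = 1.25     # 2x algo improvement ~ 2**1.25 ~ 2.38x cost reduction
SPEED_EXPONENT = 1 / 3   # speed gain from reduced depth when parameters shrink

def labor_production_multiplier(algo_improvement: float) -> float:
    cost_reduction = algo_improvement**COST_EXPONENT
    speed_increase = cost_reduction**SPEED_EXPONENT
    return cost_reduction**PARALLEL_PENALTY * speed_increase**(1 - PARALLEL_PENALTY)

for algo in (2.0, 4.0, 10.0):
    print(f"{algo}x algorithmic improvement -> "
          f"{labor_production_multiplier(algo):.2f}x labor production")
# The implied overall exponent is 1.25 * (0.7 + (1/3) * 0.3) = 1.25 * 0.8 = 1.0,
# i.e. roughly a one-to-one conversion, as noted above.
```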
Appendix: Isn’t compute tiny and decreasing per researcher?
One relevant objection is: Ok, but is this really feasible? Wouldn’t this imply that each AI researcher has only a tiny amount of compute? After all, if you use 20% of compute for inference of AI research labor, then each AI only gets 4x more compute to run experiments than for inference on itself? And, as you do algorithmic improvement to reduce AI cost and run more AIs, you also reduce the compute per AI!
First, it is worth noting that as we do algorithmic progress, both the cost of AI researcher inference and the cost of experiments on models of a given level of capability go down. Precisely, for any experiment that involves a fixed number of inference or gradient steps on a model which is some fixed effective compute multiplier below/above the performance of our AI laborers, cost is proportional to inference cost (so, as we improve our AI workforce, experiment cost drops proportionally). However, for experiments that involve training a model from scratch, I expect the reduction in experiment cost to be relatively smaller, such that such experiments must become increasingly small relative to frontier scale. Overall, it might be important to mostly depend on approaches which allow for experiments that don’t require training runs from scratch, or to adapt to increasingly smaller full experiment training runs.
To the extent AIs are made smarter rather than more numerous, this isn’t a concern. Additionally, we only need so many orders of magnitude of growth. In principle, this consideration should be captured by the exponents in the compute vs. labor production function, but it is possible this production function has very different characteristics in the extremes. Overall, I do think this concern is somewhat important, but I don’t think it is a dealbreaker for a substantial number of OOMs of growth.
Appendix: Can’t algorithmic efficiency only get so high?
My sense is that this isn’t very close to being a blocker. Here is a quick bullet point argument (from some slides I made) that takeover-capable AI is possible on current hardware.
- Human brain is perhaps ~1e14 FLOP/s.
- With that efficiency, each H100 can run 10 humans (current cost ~$2/hour).
- 10s of millions of human-level AIs with just current hardware production.
- Human brain is probably very suboptimal:
  - AIs already much better at many subtasks.
  - Possible to do much more training than within-lifetime training, with parallelism.
  - Biological issues: locality, noise, focus on sensory processing, memory limits.
- Smarter AI could be more efficient (smarter humans use less FLOP per task).
- AI could be 1e2-1e7x more efficient on tasks like coding and engineering:
  - Probably a smaller improvement on video processing.
  - Say 1e4, so 100,000 per H100.
- Qualitative intelligence could be a big deal.
- Seems like peak efficiency isn’t a blocker.
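The arithmetic behind these bullets, spelled out (the H100 throughput is just the figure implied by “each H100 can run 10 humans”, and the count of H100-equivalents is a hypothetical placeholder consistent with “10s of millions”):

```python
BRAIN_FLOP_PER_S = 1e14      # rough human-brain estimate from the bullets above
H100_FLOP_PER_S = 1e15       # rough throughput implied by "each H100 can run 10 humans"
EFFICIENCY_GAIN = 1e4        # assumed software efficiency advantage over the brain ("say 1e4")
H100_EQUIVALENTS = 5e6       # hypothetical stock of H100-equivalents from current production

humans_per_h100 = H100_FLOP_PER_S / BRAIN_FLOP_PER_S
print(humans_per_h100)                      # ~10 human-equivalents per H100 at brain efficiency
print(H100_EQUIVALENTS * humans_per_h100)   # tens of millions of human-level AIs
print(humans_per_h100 * EFFICIENCY_GAIN)    # ~100,000 per H100 at the assumed 1e4 gain
```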
This is just approximate because you can also trade off speed with cost in complicated ways and research new ways to more efficiently trade off speed and cost. I’ll be ignoring this for now.
It’s hard to determine because inference cost reductions have been driven by spending more compute on making smaller models e.g., training a smaller model for longer rather than just being driven by algorithmic improvement, and I don’t have great numbers on the difference off the top of my head.
Interesting comparison point: Tom thought this would give a way larger boost in his old software-only singularity appendix.
When considering an “efficiency only singularity”, some different estimates get him r ~= 1, r ~= 1.5, and r ~= 1.6. (Where r is defined so that “for each x% increase in cumulative R&D inputs, the output metric will increase by r*x”. The condition for increasing returns is r > 1.)
Whereas when including capability improvements:
Though note that later in the appendix he adjusts down from 85% to 65% due to some further considerations. Also, last I heard, Tom was more like 25% on software singularity. (ETA: Or maybe not? See other comments in this thread.)
Interesting. My numbers aren’t very principled and I could imagine thinking capability improvements are a big deal for the bottom line.
Can you say roughly who the people surveyed were? (And if this was their raw guess or if you’ve modified it.)
I saw some polls from Daniel previously where I wasn’t sold that they were surveying people working on the most important capability improvements, so wondering if these are better.
Also, somewhat minor, but: I’m slightly concerned that surveys will overweight areas where labor is more useful relative to compute (because those areas should have disproportionately many humans working on them) and therefore be somewhat biased in the direction of labor being important.
I’m citing the polls from Daniel + what I’ve heard from random people + my guesses.
Ryan discusses this at more length in his 80K podcast.
I think your outline of an argument against contains an important error.
Importantly, while the spending on hardware for individual AI companies has increased by roughly 3-4x each year[1], this has not been driven by scaling up hardware production by 3-4x per year. Instead, total compute production (in terms of spending, building more fabs, etc.) has increased by a much smaller amount each year, but a higher and higher fraction of that compute production has gone to AI. In particular, my understanding is that roughly ~20% of TSMC’s volume is now AI while it used to be much lower. So, the fact that scaling up hardware production is much slower than scaling up algorithms hasn’t bitten yet, and this isn’t factored into the historical trends.
Another way to put this is that the exact current regime can’t go on. If trends continue, then >100% of TSMC’s volume will be used for AI by 2027!
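A quick extrapolation of why this regime can’t continue (my own illustration; the AI growth rate and the ~20% starting share are the rough figures from this comment, and the slower total-production growth rate is a hypothetical placeholder):

```python
ai_share = 0.20     # rough current AI fraction of TSMC volume (per the comment above)
AI_GROWTH = 3.5     # ~3-4x/year growth in AI compute spending
TOTAL_GROWTH = 1.2  # hypothetical, much slower growth in total leading-edge production

for years_from_now in range(4):
    print(f"+{years_from_now}y: implied AI share of TSMC volume ~ {ai_share:.0%}")
    ai_share *= AI_GROWTH / TOTAL_GROWTH
# The implied share exceeds 100% within about two years, which is the sense in which
# the exact current regime can't go on.
```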
Building takeover-capable AI only counts as “massive compute automation” in my operationalization if it happens by scaling up TSMC to >1000% of what its potential FLOP output volume would otherwise have been. (And without such a large build-out, the economic impacts and dependency of the hardware supply chain (at the critical points) could be relatively small.) So, massive compute automation requires something substantially off trend from TSMC’s perspective.
[Low importance] Based on my rough understanding, it is only possible to build takeover-capable AI without previously breaking an important trend if this happens prior to around 2030: by then, either the hardware spending trend must break or TSMC production must go substantially above trend. If takeover-capable AI is built prior to 2030, it could occur without substantial trend breaks, but this gets somewhat crazy towards the end of the timeline: hardware spending keeps increasing at ~3x per year for each actor (with some consolidation and acquisition of previously produced hardware yielding a one-time increase of up to about 10x, which buys another ~2 years for this trend), algorithmic progress remains steady at ~3-4x per year, and TSMC expands production somewhat faster than previously but not substantially above trend, and these together suffice for getting sufficiently powerful AI. In this scenario, this wouldn’t count as massive compute automation.
The spending on training runs has increased by 4-5x per year according to Epoch, but part of this is making training runs go longer, which means the story for overall spending is more complex. We care about the overall spend on hardware, not just the spend on training runs.
Thanks, this is helpful. So it sounds like you expect
1. AI progress which is slower than the historical trendline (though perhaps fast in absolute terms), because we’ll soon have finished eating through the hardware overhang; and
2. separately, takeover-capable AI soon (i.e. before hardware manufacturers have had a chance to scale substantially).
It seems like all the action is taking place in (2). Even if (1) is wrong (i.e. even if we see substantially increased hardware production soon), that makes takeover-capable AI happen faster than expected; IIUC, this contradicts the OP, which seems to expect takeover-capable AI to happen later if it’s preceded by substantial hardware scaling.
In other words, it seems like in the OP you care about whether takeover-capable AI will be preceded by massive compute automation because:
[this point still holds up] this affects how legible it is that AI is a transformative technology
[it’s not clear to me this point holds up] takeover-capable AI being preceded by compute automation probably means longer timelines
The second point doesn’t clearly hold up because, if we don’t see massive compute automation, this suggests that AI progress is slower than the historical trend.
I don’t think (2) is a crux (as discussed in person). I expect that if takeover-capable AI takes a while (e.g. it happens in 2040), then we will have a long winter where economic value from AI doesn’t increase that fast, followed by a period of faster progress around 2040. If progress is relatively stable across this entire period, then we’ll have enough time to scale up fabs. Even if progress isn’t stable, we could see enough total value from AI in the slower growth period to scale up fabs by 10x, but this would require >>$1 trillion of economic value per year I think (which IMO seems not that likely to come far before takeover-capable AI due to views about economic returns to AI and returns to scaling up compute).
I think this happening in practice is about 60% likely, so I don’t think feasibility vs. in practice is a huge delta.