Does a need for broad automation really place a speed limit on economic growth?
I’ve been trying to better understand the assumptions behind people’s differing predictions of economic growth from AI, and what we can monitor — investment? employment? interest rates? — to narrow down what is actually happening.
I’m not an economist; I am an engineer who implements AI systems. The reason I want to understand the potential impact of AI is because it’s going to matter for my own career and for everyone I know.
In the spirit of “learning in public”, I’ll share what I’ve learned (which is a little) and what’s not making sense to me (which is a lot).
In Ege Erdil’s recent case for multi-decade AI timelines, he gives the following intuition for why a “software-only singularity” is unlikely:
The case for AI revenue growth not slowing down at all, or perhaps even accelerating, rests on the feedback loops that would be enabled by human-level or superhuman AI systems: short timelines advocates usually emphasize software R&D feedbacks more, while I think the relevant feedback loops are more based on broad automation and reinvestment of output into capital accumulation, chip production, productivity improvements, et cetera.
The implicit assumption is that chip production, energy buildouts, and general physical capital accumulation can only go so fast.
Certainly, today’s physical capital stock took a long time to accumulate. With today’s technology, it’s not feasible to scale physical capital formation by 10x, or to 10x the capital stock in 10 years; it would simply be far too expensive.
But economics does not place any theoretical limit on the productivity of new vintages of capital goods. If tomorrow’s technology was far more effective at producing capital goods, physical capital could grow at an unprecedented rate.
(In frontier economies today, the speed of physical capital growth is well below historical records. For reference, South Korea’s physical capital stock grew at ~13.2% per year at the peak of its growth miracle. And from 2004 to 2007, China’s electricity production grew at an average of ~14.2% per year.)
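For a sense of scale, the compounding arithmetic behind these figures can be checked directly (rates taken from the text above):

```python
def annual_rate_for_multiple(multiple: float, years: float) -> float:
    """Constant annual growth rate needed to reach `multiple` over `years`."""
    return multiple ** (1 / years) - 1

def multiple_after(rate: float, years: float) -> float:
    """Growth multiple after `years` at a constant annual `rate`."""
    return (1 + rate) ** years

# 10x-ing the capital stock in 10 years requires ~25.9% growth per year,
print(f"{annual_rate_for_multiple(10, 10):.1%}")  # -> 25.9%
# roughly double South Korea's peak ~13.2%/yr, which compounds to ~3.5x per decade.
print(f"{multiple_after(0.132, 10):.2f}x")        # -> 3.46x
```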
Today’s physical capital has two unintuitive properties that create the potential for it to be produced at much lower cost, despite the intuition that “physical = slow to build”.
As income increases, consumption is “dematerialized”. Spending on physical capital increasingly goes toward high-value-to-weight items, such as semiconductors and medical devices. Explosive economic growth may not mainly be about building, say, 10 houses and 10 airplanes per person (how would these even be used?) but rather building increasingly elaborate hospitals, medical equipment, and so on.
The barrier to creating these goods may lie more with R&D and design and coordination, rather than physical throughput limits. Some physical work is still required, but physical throughput is not necessarily the bottleneck.
High-value physical capital embeds a great deal of skilled labor. Capital equipment is produced using factors of production that can be recursively attributed to non-capital inputs — labor, energy, and natural resources. This is reflected, for instance, in BEA’s industry-level “KLEMS” (capital, labor, energy, materials, and services) accounts. The supply chain of a complex capital good generally has skilled labor as a major, or even dominant, portion of the value added in its production.
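A toy illustration of what “recursively attributed” means here, with entirely invented cost shares: in input-output accounting, the total (direct plus indirect) primary-input content of a good is F (I - A)^-1, where A holds intermediate-input shares and F holds direct primary-input shares. The three industries and all numbers below are hypothetical, not from BEA data.

```python
import numpy as np

# A[i, j] = value of intermediate input from industry i per $1 of industry j output.
# Industries (invented): 0 = capital equipment, 1 = components, 2 = raw materials.
A = np.array([
    [0.00, 0.05, 0.00],
    [0.40, 0.10, 0.00],
    [0.10, 0.30, 0.05],
])
# Direct primary-input shares per $1 of each industry's output (rows: labor, energy/resources).
# Each column's intermediate shares plus primary shares sum to $1.
F = np.array([
    [0.45, 0.40, 0.35],
    [0.05, 0.15, 0.60],
])

# Total primary content per $1 of final output: F (I - A)^-1 (Leontief inverse).
total = F @ np.linalg.inv(np.eye(3) - A)
labor_content, energy_content = total[:, 0]
print(f"labor embodied per $1 of capital equipment:  ${labor_content:.2f}")
print(f"energy/resources embodied per $1:            ${energy_content:.2f}")
```

With these invented shares, roughly $0.73 of every dollar of capital-equipment output ultimately traces back to labor somewhere in the supply chain, even though the direct labor share is only $0.45.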
Example: MRI machines. Medical MRI machines are very expensive, often costing more than $1 million per unit. I asked o3 to recursively break down the cost; I’ll keep the inferences high-level, since I don’t trust o3’s precise claims. Essentially, a large portion of an MRI machine’s cost traces to its superconducting magnet, which requires a great deal of specialized equipment to make; that equipment is itself complex and produced at low volume. Low-volume capital goods are expensive because they embody considerable skilled labor amortized over few units.
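A stylized version of that last point, with entirely hypothetical numbers (the $50M and $200k figures are made up for illustration):

```python
def unit_cost(fixed_skilled_labor: float, marginal_cost: float, units: int) -> float:
    """Unit cost when a fixed skilled-labor cost (design, tooling, process know-how)
    is amortized over an annual production run of `units`."""
    return fixed_skilled_labor / units + marginal_cost

# Hypothetical: $50M of engineering/tooling, $200k of materials and direct labor per unit.
for n in (50, 500, 5000):
    print(f"{n:>5} units/yr -> ${unit_cost(50e6, 200e3, n) / 1e3:,.0f}k per unit")
```

At 50 units a year the amortized fixed cost dominates (about $1.2M per unit); at 5,000 units it nearly vanishes (about $210k). If automated skilled labor collapsed the fixed component, low-volume capital goods would stop being expensive.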
Physical constraints on the replication rate of capital goods do exist, but in at least one major case they are far from binding. Solar panels, for example, take ~1 year to pay back their energy cost. If solar panels were the only source of electricity, the panel fleet could at most double each year, so GDP could not grow at >100% per year while remaining equally electricity-intensive. But that ceiling is far, far above any predicted rate of economic growth, even “explosive” growth, which Davidson, Erdil, and Besiroglu define to start at 30% per year.
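That bound can be made explicit. Ignoring within-year compounding, a panel fleet that reinvests all of its energy output into manufacturing new panels grows at most at 1/(payback time) per year:

```python
def max_fleet_growth_rate(energy_payback_years: float) -> float:
    """Upper bound on annual growth of a solar fleet that reinvests all of its
    energy output into making new panels (no within-year compounding)."""
    return 1.0 / energy_payback_years

# A ~1-year energy payback caps fleet growth at ~100%/yr, i.e. doubling every year,
# well above the ~30%/yr threshold used to define 'explosive' growth.
print(f"{max_fleet_growth_rate(1.0):.0%}")  # -> 100%
```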
I see a strong possibility that if human-level skilled labor were free, the capital stock could grow at an unprecedented rate.
The idea of human-level skilled labor being completely free is alluring, but it may never be completely true so long as humans maintain agency (which, in my optimistic worldview, they likely will).
Even if production processes were completely automated, humans will probably want to retain some control over judging the usefulness of the outputs and deciding where to go next, especially where material resource costs and timeliness still matter (i.e., the feedback loops you mention). The work involved in this decision-making could be greater than many people assume.
A good example of this is Google’s claim that ~30% of its code is now AI-generated, while coding velocity has increased by only ~10%. Deciding what work to pursue and evaluating output, particularly in an industry with such specialized outputs, already makes up a substantial share of labor, and it hasn’t been automated to the same degree as writing code.
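That gap is roughly what an Amdahl’s-law-style calculation predicts when only part of a job is accelerated. A sketch (the 20% figure below is illustrative, not from Google):

```python
def overall_speedup(accelerated_fraction: float, task_speedup: float) -> float:
    """Amdahl's law: whole-job speedup when only a fraction of it is accelerated."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / task_speedup)

# Even if writing code became infinitely fast, if it is only ~20% of an engineer's
# job (requirements, review, coordination untouched), the ceiling is 1.25x overall.
print(f"{overall_speedup(0.20, float('inf')):.2f}x")  # -> 1.25x
```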
As discussed in our other thread, modeling how much time is required to define requirements/prompt and evaluate output will be an important component of forecasting how far and fast AI advancements might take us. Realistic estimates of this will likely support your hypothesis of the bottlenecks being in R&D and design and coordination, rather than physical throughput limits.