Forgive my language, but this paragraph looks to me like an example of tech people being a bit too full of themselves sometimes. The IT sector is clearly a cherry-picked example and cannot be extrapolated to the rest of the economy. It’s also not a good proxy for utilons...
I think you’ve missed a key point here. The argument did not actually rely on extrapolating to the rest of the economy. The intuitive claim was that the IT sector by itself seemed to have grown enough that GDP should have grown by a factor of hundreds. So cherry-picking is irrelevant; if there is any such sector, then that’s enough for the argument.
I do agree that it’s not a good proxy for utilons.
Now, this part is where I think you’ve correctly identified the key issue (although I disagree with the “how it should be” bit):
And I argue that that’s how it should be—a transistor is on average performing much more important tasks in 1960, like planning trajectories for moon missions or running banking systems, than in 2021, like allowing people to watch TikTok videos or play games in HD. On the other hand, people still need houses to live in no matter how fancy their smartphones become. For average people, real estate is genuinely a bigger deal now than even a massive increase in their phone’s camera resolution.
This is a decreasing marginal returns argument; the billionth iPhone is worth a lot less than the tenth. But it’s not like there’s an easily-identifiable “correct” price point to use on that curve; some iPhones do in fact provide a lot more value than others. After all, the first ten or a thousand iPhones could probably have sold for a price orders of magnitude higher (even without signalling value). If we just use current prices, then we’re underestimating the iPhone’s value contribution.
Your story about the longevity pill is great, and I generally agree with the conclusion at the end.
True. Still, using 1960s prices with current production assumes the 1960 demand curve was flat, right? It’s like using off-season avocado prices, when no one buys them, to compute real GDP during avocado season.
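To make the avocado analogy concrete, here's a toy sketch (all prices and quantities are invented for illustration) of how the choice of base-year prices swings a sector's measured "real" growth. With two goods, valuing today's transistor output at 1960-era prices makes the IT sector dwarf everything else, while valuing it at current prices makes it nearly vanish:

```python
# Toy illustration of the base-year problem (all numbers invented).
# Two goods: transistors (price collapsed, quantity exploded) and
# housing (price and quantity roughly stable in real terms).
prices_1960 = {"transistor": 8.0,  "house": 1e5}
qty_1960    = {"transistor": 1e6,  "house": 1e6}
prices_2021 = {"transistor": 1e-7, "house": 1e5}
qty_2021    = {"transistor": 1e15, "house": 2e6}

def quantity_index(prices, q_new, q_old):
    """Ratio of total output across the two years, both valued at `prices`."""
    value = lambda q: sum(prices[g] * q[g] for g in prices)
    return value(q_new) / value(q_old)

laspeyres = quantity_index(prices_1960, qty_2021, qty_1960)  # old prices
paasche   = quantity_index(prices_2021, qty_2021, qty_1960)  # new prices

print(f"growth valued at 1960 prices: {laspeyres:,.0f}x")  # ~80,000x
print(f"growth valued at 2021 prices: {paasche:.1f}x")     # ~2.0x
```

Same physical history, and the measured growth differs by four orders of magnitude depending purely on which year's prices you freeze, which is exactly the reference-year strangeness the comments above are circling.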
But it’s not like there’s an easily-identifiable “correct” price point to use on that curve
Shouldn’t we measure something like social surplus produced each year? (In efficient markets without externalities, this would be producer surplus given perfect price discrimination.)
(I think my comment here is slightly wrong but probably right in spirit; I don’t have time to think it out right now.)
So, there’s this general problem in economics where economists want to talk about what we “should” do in policy debates, and that justifies quantifying things in terms of e.g. social surplus (or whatever), on the basis that we want policies to increase social surplus (or whatever).
The problem with this is that such metrics are not chosen for robust generalization to many different use-cases, so unsurprisingly they don’t generalize very well to other use-cases. For instance, if we want to make predictions about the probable trajectory of AI based on the smoothness of some metric of economic impact of technologies, social surplus does not seem like a particularly great metric for that purpose.
I don’t think that’s what I mean. If we use 1950s real prices, then we’re overestimating the value of the transistor production because we’re multiplying quantity by the price very early on the marginal utility curve, when they’re still marginally fulfilling extremely high value use cases. Conversely, if we use current prices, we’re underestimating the GDP contribution. So it seemed to me that we should integrate along the willingness to pay curve, which I think gets us something like total surplus.
(There are a few wrinkles in that the rest of the economy has also changed since the 1950s, and so I imagine that will introduce some more subtle problems.)
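The integrate-along-the-willingness-to-pay-curve proposal can be sketched numerically. A minimal example with an invented demand curve (the functional form and all numbers are assumptions, not real transistor data) shows both claims at once: early price times current quantity overestimates, current price times current quantity underestimates, and the integral lands in between:

```python
# Minimal sketch of integrating willingness to pay (invented demand curve).
A, b = 100.0, 0.5

def wtp(q):
    """Willingness to pay for the q-th unit: a falling demand curve."""
    return A * q ** (-b)

q_now = 1e12                      # current cumulative quantity
p_early = wtp(1e2)                # marginal price when only 100 units existed
p_now = wtp(q_now)                # current marginal price

overestimate = p_early * q_now    # early price x current quantity
underestimate = p_now * q_now     # current price x current quantity

# Closed-form integral of A*q^(-b) from 1 to q_now: total willingness
# to pay, i.e. roughly the total surplus the comment suggests measuring.
total_surplus = A / (1 - b) * (q_now ** (1 - b) - 1)

print(f"early price x quantity:   {overestimate:.2e}")   # ~1e13
print(f"current price x quantity: {underestimate:.2e}")  # ~1e8
print(f"integrated surplus:       {total_surplus:.2e}")  # ~2e8
```

The integral sits strictly between the two endpoint estimates, which is the sense in which it "minimizes strangeness which depends on the reference year."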
That would indeed be the right way to estimate total surplus. The problem is that total surplus is not obviously the right metric to worry about. For a use case like forecasting AI, for instance, it’s not particularly central.
No opinion because I haven’t thought about that use case. My comment was intended to answer “how do you actually measure an idealized version of a GDP growth curve”—minimizing strangeness which depends on the reference year—without considering its usefulness for forecasting AI.
Overall, I think the right way to think about GDP growth in relation to utilons is that it’s a combination of removing trivial inconveniences for large numbers of people, solving mission-critical problems for a few people, and occasionally creating positive or negative externalities through network effects that have to be analyzed on an individual basis.
There’s an argument out there that as the economy picks the low-hanging fruit, further innovation becomes harder to achieve. Stagnation sets in. I think this is an incomplete framing that is misleading over longer time scales.
When we think in terms of years, there is a set of tractable technological achievements, and the low-hanging fruit metaphor is appropriate here. There’s a set of problems we basically know how to solve. We put in the work to solve them roughly in order of priority, and see diminishing returns on our investment.
However, one of the knock-on effects of solving these problems is that they open up formerly intractable problems and inaccessible resources.
For example, sequencing the human genome was an expensive and time-consuming project. It came half a century after the structure of DNA was determined, and represented the culmination of our understanding of the genome to that point.
Once we’d achieved that high-hanging fruit, however, the endeavor itself created a network of highly skilled scientists with the know-how to make the process cheaper and more reliable. Now it’s relatively cheap to sequence the genome. Cheap sequencing gives us access to massive amounts of genetic data. Cheap compute lets us gather and process big health data, and interpret it in the light of genetic data. Altogether, this lets us refocus our scientific efforts in more productive directions.
Doing all this would simply not have been possible in an earlier technological era. But the network effects that make the new wave of growth possible take time to accumulate. It takes time to build out the highway system or the internet, or to figure out how to automate production of a useful product.
So we’ll see some time delay between inventing the tech that enables a network, the growth of that network to its full potential, and the harnessing of that network to drive a new wave of technological innovation.
We can’t assume that these “waves of innovation” have diminishing value over time, the way that we can assume that the automation of specific products produces diminishing value as more consumers gain access to them. They deliver value by two different mechanisms. Individual products solve particular problems. “Waves of innovation” give rise to entirely different classes of products, which may turn out to deliver widely varying average levels of utility. And even if a particular wave of innovation delivers a very high level of average utility, an entirely efficient market can’t shortcut the technological and network-growth barriers to implementing that wave. It just takes time, and the work has to be done in a certain order. The exact outcomes are not predictable in advance.
So from a local perspective on the order of years, we should focus on the diminishing returns story. On the order of decades, though, we should focus on the “waves of innovation” and network effects story, where diminishing returns is not operating.