I agree that economists make some implicit assumptions about what AGI will look like that should be more explicit. But I disagree with several points in this post.
On equilibrium: A market equilibrates when supply and demand are balanced at the current price. At any given instant this can happen for a market even with AGI (sellers increase the price until buyers are not willing to buy more). Being at an equilibrium doesn’t imply that supply, demand, and price won’t change over time. Economists are very familiar with growth and various kinds of dynamic equilibria.
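For concreteness, here’s a minimal toy sketch (a made-up linear model with invented numbers, not anything from the discussion itself) of a market that clears at every instant even while the equilibrium itself keeps moving:

```python
# Toy linear market (all numbers invented for illustration):
# demand Q = a - b*P, supply Q = c + d*P.
def equilibrium(a, b, c, d):
    """Return the clearing price and quantity where demand equals supply."""
    p = (a - c) / (b + d)
    return p, a - b * p

# Suppose cheap AGI labor shifts the supply intercept outward each "year":
for year, c in enumerate([10, 20, 40, 80]):
    p, q = equilibrium(a=100, b=2, c=c, d=3)
    print(f"year {year}: clearing price = {p:.2f}, quantity traded = {q:.2f}")
```

The market is “at equilibrium” in every period, yet price and quantity keep changing as the curves shift, which is the dynamic-equilibrium point.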
Equilibria aside, it is an interesting point that AGI combines aspects of both labor and capital in novel ways. Being able to both replicate and work in autonomous ways could create very interesting feedback loops.
Still, there could be limits and negative feedback on the feedback loops you point out. The ideas that labor adds value and that costs go down with scale are usually true but not universal. Things like resource scarcity or coordination problems can cause increasing marginal cost with scale. If there is very powerful AGI and a very fast takeoff, I expect resource scarcity to be a constraint.
I agree that AGI could break usual intuitions about capital and labor. However, I don’t think this is misleading economists. I think economists don’t consider AGI launching coups or pursuing jobs/entrepreneurship independently because they don’t expect it to have those capabilities or dispositions, not because they conflate it with inanimate capital. Even in the linked post, Tyler Cowen says: “I don’t think the economics of AI are well-defined by either ‘an increase in labor supply,’ ‘an increase in TFP,’ or ‘an increase in capital,’ though it is some of each of those.”
Lastly, I fully agree that GDP doesn’t capture everything of value—even now it completely misses value from free resources like Wikipedia and unpaid labor like housework, and can underestimate the value of new technology. Still, if AGI transforms many industries, as it would likely need to in order to transform the world, real GDP would capture this.
All in all, I don’t think economic principles are misleading. Maybe econ thinking will have to be expanded to deal with AGI. But right now, the difference between the economists and the LessWrongers comes down to what capabilities they expect AGI to have.
Thanks. I don’t think we disagree much (more in emphasis than content).
Things like resource scarcity or coordination problems can cause increasing marginal cost with scale.
I understand “resource scarcity” but I’m confused by “coordination problems”. Can you give an example? (Sorry if that’s a stupid question.)
Resource scarcity seems unlikely to bite here, at least not for long. If some product is very profitable to create, and one of its components has a shortage, then people (or AGIs) will find ways to redesign around that component. AGI does not fundamentally need any rare components. Biology proves that it is possible to build human-level computing devices from sugar and water and oxygen (i.e. brains). As for electricity, there’s plenty of solar cells, and plenty of open land for solar cells, and permitting is easy if you’re off-grid.
(I agree that the positive feedback loop will not spin out to literally infinity in literally zero time, but stand by “light-years beyond anything in economic history”.)
I think economists don’t consider AGI launching coups or pursuing jobs/entrepreneurship independently because they don’t expect it to have those capabilities or dispositions, not because they conflate it with inanimate capital. … right now, the difference between the economists and the LessWrongers comes down to what capabilities they expect AGI to have.
I wasn’t complaining about economists who say “the consequences of real AGI would be [crazy stuff], but I don’t expect real AGI in [time period T / ever]”. That’s fine!
(Well, actually I would still complain if they state this as obvious, rather than owning the fact that they are siding with one group of AI domain experts over a different group of AI domain experts, about a technical AI issue on which the economists themselves have no expertise. And if T is more than, I dunno, 30 years, then that makes it even worse, because then the economists would be siding with a dwindling minority of AI domain experts over a growing majority, I think.)
Instead I was mainly complaining about the economists who have not even considered that real AGI is even a possible thing at all. Instead it’s just a big blind spot for them.
And I don’t think this is independent of their economics training (although non-economists are obviously capable of having this blind spot too).
Instead, I think that (A) “such-and-such is just not a thing that happens in economies in the real world” and (B) “real AGI is even a conceivable possibility” are contradictory. And I think that economists are so steeped in (A) that they consider it to be a reductio ad absurdum for (B), whereas the correct response is the opposite ((B) disproves (A)).
For them, real AGI does not compute; it’s like a square circle. People like me who talk about it are not just saying something false but saying incoherent nonsense, or maybe they assume they must be misunderstanding us and “charitably” round what I’m saying to something quite different, while they themselves use terms like “AGI” or “ASI” for something much weaker without realizing that they’re doing so.
Thanks for the thoughtful reply!
I understand “resource scarcity” but I’m confused by “coordination problems”. Can you give an example? (Sorry if that’s a stupid question.)
This is the idea that at some point in scaling up an organization you could lose efficiency due to needing more/better management, more communication (meetings), and longer communication processes: “bloat” in general. I’m not claiming it’s likely to happen with AI, just that it’s another possible reason for increasing marginal cost with scale.
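As a crude numerical illustration of that mechanism (hypothetical numbers, not a claim about AI or any real organization): if a slice of each worker’s time gets eaten by pairwise coordination, the marginal cost of output rises with headcount:

```python
# Hypothetical numbers only: coordination overhead grows with pairwise
# communication links (~ n^2), so each extra worker adds less output and
# the marginal cost of output rises with scale.
def output(n, overhead_per_link=0.0005):
    links = n * (n - 1) / 2
    productive_fraction = max(0.0, 1 - overhead_per_link * links / n)  # drag per worker
    return n * productive_fraction

wage = 1.0
for n in [10, 100, 500, 1000]:
    marginal_product = output(n + 1) - output(n)
    print(f"n={n}: total output = {output(n):.1f}, "
          f"approx marginal cost = {wage / marginal_product:.2f}")
```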
Resource scarcity seems unlikely to bite here, at least not for long. If some product is very profitable to create, and one of its components has a shortage, then people (or AGIs) will find ways to redesign around that component.
Key resources that come to mind would be electricity and chips (and materials to produce these). I don’t know how elastic production is in these industries, but the reason I expect it to be a barrier is that you’re constrained by the slowest factor. For huge transformations or redesigning significant parts of the current AI pipeline, like using a different kind of computation, I think there’s probably lots of serial work that has to be done to make it work. I agree the problems are solvable, but it shifts from “how much demand will there be for cheap AGI” to “how fast can resources be scaled up”.
I wasn’t complaining about economists who say “the consequences of real AGI would be [crazy stuff], but I don’t expect real AGI in [time period T / ever]”. That’s fine!
Instead I was mainly complaining about the economists who have not even considered that real AGI is even a possible thing at all. Instead it’s just a big blind spot for them.
Yeah, I definitely agree.
And I don’t think this is independent of their economics training (although non-economists are obviously capable of having this blind spot too).
Instead, I think that (A) “such-and-such is just not a thing that happens in economies in the real world” and (B) “real AGI is even a conceivable possibility” are contradictory. And I think that economists are so steeped in (A) that they consider it to be a reductio ad absurdum for (B), whereas the correct response is the opposite ((B) disproves (A)).
I see how this could happen, but I’m not convinced this effect is actually happening. As you mention, many people have this blind spot. There are people who claim AGI is already here (and evidently have a different definition of AGI). I think my crux is that this isn’t unique to economists. Most non-AI people who are worried about AI seem worried that it will take their job, not all jobs. Some people are willing to accept at face value the premise that AGI (as we define it) will exist, but it seems to me that most people outside of AI who question the premise at all end up not taking it seriously.
This is the idea that at some point in scaling up an organization you could lose efficiency due to needing more/better management, more communication (meetings), and longer communication processes: “bloat” in general. I’m not claiming it’s likely to happen with AI, just that it’s another possible reason for increasing marginal cost with scale.
Hmm, that would apply to an individual firm but not to a product category, right? If Firm 1 is producing so much [AGI component X] that they pile up bureaucracy and inefficiency, then Firms 2, 3, 4, and 5 will start producing [AGI component X] with less bureaucracy, and undercut Firm 1, right? If there’s an optimal firm size, the market can still be arbitrarily large via arbitrarily many independent firms of that optimal size.
(Unless Firm 1 has a key patent, or uses its market power to do anticompetitive stuff, etc. …although I don’t expect IP law or other such forces to hold internationally given the stakes of AGI.)
(Separately, I think AGI will drastically increase economies of scale, particularly related to coordination problems.)
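For what it’s worth, here’s a toy numerical version of the “optimal firm size” point above, with all parameters invented for illustration: a U-shaped average cost curve makes one giant firm bloated, but the industry can still expand by replicating firms at the cost-minimizing size.

```python
# Toy cost curve (parameters invented): AC(q) = fixed/q + 1 + congestion*q,
# i.e. fixed costs dominate when small, "bureaucracy" congestion when large.
def avg_cost(q, fixed=100.0, congestion=0.01):
    return fixed / q + 1.0 + congestion * q

# Cost-minimizing firm size: d(AC)/dq = 0  =>  q* = sqrt(fixed / congestion)
q_star = (100.0 / 0.01) ** 0.5  # 100 units per firm
print(f"optimal firm size: {q_star:.0f} units, AC = {avg_cost(q_star):.2f}")

# Producing 1,000,000 units: one bloated firm vs. 10,000 firms at optimal size
print(f"one giant firm:       AC = {avg_cost(1_000_000):.2f}")
print(f"10,000 optimal firms: AC = {avg_cost(q_star):.2f}")
```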
I see how this could happen, but I’m not convinced this effect is actually happening. … I think my crux is that this isn’t unique to economists.
It’s definitely true that non-economists are capable of dismissing AGI for bad reasons, even if this post is not mainly addressed at non-economists. I think the thing I said is a contributory factor for at least some economists, based on my experience and conversations, but not all economists, and maybe I’m just mistaken about where those people are coming from. Oh well, it’s probably not worth putting too much effort into arguing about Bulverism. Thanks for your input though.