I understand “resource scarcity” but I’m confused by “coordination problems”. Can you give an example? (Sorry if that’s a stupid question.)
This is the idea that, at some point in scaling up an organization, you could lose efficiency due to needing more/better management, more communication (meetings), and longer communication processes; “bloat” in general. I’m not claiming it’s likely to happen with AI, just that it’s another possible reason for increasing marginal cost with scale.
Resource scarcity seems unlikely to bite here, at least not for long. If some product is very profitable to create, and one of its components has a shortage, then people (or AGIs) will find ways to redesign around that component.
Key resources that come to mind are electricity and chips (and the materials to produce them). I don’t know how elastic production is in these industries, but the reason I expect resource scarcity to be a barrier is that you’re constrained by the slowest factor. For huge transformations, or for redesigning significant parts of the current AI pipeline (like using a different kind of computation), I think there’s probably a lot of serial work that has to be done to make it work. I agree the problems are solvable, but it shifts the question from “how much demand will there be for cheap AGI” to “how fast can resources be scaled up”.
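One minimal way to formalize the “constrained by the slowest factor” point, using (purely for illustration) a Leontief-style production function in which each unit of AGI services requires fixed amounts of every input:

\[
Q \;=\; \min_{i} \frac{x_i}{a_i}
\]

where \(x_i\) is the available quantity of input \(i\) (electricity, chips, etc.) and \(a_i\) is the amount of input \(i\) needed per unit of output; these symbols are illustrative assumptions rather than anything claimed in the thread. Under this toy model, scaling up every input except the scarcest one leaves \(Q\) unchanged, which is the sense in which supply growth is paced by the slowest-scaling resource.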
I wasn’t complaining about economists who say “the consequences of real AGI would be [crazy stuff], but I don’t expect real AGI in [time period T / ever]”. That’s fine!
Instead I was mainly complaining about the economists who have not even considered that real AGI is a possible thing at all. It’s just a big blind spot for them.
Yeah, I definitely agree.
And I don’t think this is independent of their economics training (although non-economists are obviously capable of having this blind spot too).
Instead, I think that (A) “such-and-such is just not a thing that happens in economies in the real world” and (B) “real AGI is a conceivable possibility” are contradictory. And I think that economists are so steeped in (A) that they treat it as a reductio ad absurdum of (B), whereas the correct response is the opposite ((B) disproves (A)).
I see how this could happen, but I’m not convinced this effect is actually happening. As you mention, many people have this blind spot. There are people who claim AGI is already here (and evidently have a different definition of AGI). I think my crux is that this isn’t unique to economists. Most non-AI people who are worried about AI seem worried that it will take their job, not all jobs. Some people are willing to accept at face value the premise that AGI (as we define it) will exist, but it seems to me that most people outside of AI who question the premise at all end up not taking it seriously.
This is the idea that, at some point in scaling up an organization, you could lose efficiency due to needing more/better management, more communication (meetings), and longer communication processes; “bloat” in general. I’m not claiming it’s likely to happen with AI, just that it’s another possible reason for increasing marginal cost with scale.
Hmm, that would apply to an individual firm but not to a product category, right? If Firm 1 is producing so much [AGI component X] that they pile up bureaucracy and inefficiency, then Firms 2, 3, 4, and 5 will start producing [AGI component X] with less bureaucracy, and undercut Firm 1, right? If there’s an optimal firm size, the market can still be arbitrarily large via arbitrarily many independent firms of that optimal size.
(Unless Firm 1 has a key patent, or uses its market power to do anticompetitive stuff, etc. …although I don’t expect IP law or other such forces to hold internationally given the stakes of AGI.)
(Separately, I think AGI will drastically increase economies of scale, particularly related to coordination problems.)
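A quick sketch of the replication argument above, under the textbook assumption (made here purely for illustration) of a U-shaped average cost curve: if each firm’s average cost \(AC(q)\) is minimized at some firm size \(q^{*}\), then \(n\) identical firms can together supply

\[
Q \;=\; n\,q^{*}, \qquad q^{*} = \arg\min_{q} AC(q),
\]

at average cost \(AC(q^{*})\) for any \(n\). So industry output can grow arbitrarily large at roughly constant cost per unit even though each individual firm hits diseconomies of scale past \(q^{*}\); the symbols \(AC\), \(q^{*}\), and \(n\) are illustrative, not anything established in the thread.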
I see how this could happen, but I’m not convinced this effect is actually happening. … I think my crux is that this isn’t unique to economists.
It’s definitely true that non-economists are capable of dismissing AGI for bad reasons, even if this post is not mainly addressed at non-economists. I think the thing I said is a contributory factor for at least some economists, based on my experience and conversations, but not all economists, and maybe I’m just mistaken about where those people are coming from. Oh well, it’s probably not worth putting too much effort into arguing about Bulverism. Thanks for your input though.
Thanks for the thoughtful reply!