Thanks for the thoughtful reply!
I understand “resource scarcity” but I’m confused by “coordination problems”. Can you give an example? (Sorry if that’s a stupid question.)
This is the idea that at some point in scaling up an organization you start losing efficiency: you need more (and better) management, more communication (more meetings), and longer communication chains; “bloat” in general. I’m not claiming this is likely to happen with AI, just that it’s another possible reason for marginal cost to increase with scale.
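To make that intuition a bit more concrete, here’s a toy sketch (entirely my own illustration, with made-up assumptions): if every pair of workers needs a communication channel, channels grow roughly quadratically while headcount grows linearly, so coordination overhead per worker keeps rising.

```python
# Toy illustration of coordination overhead (assumptions are hypothetical):
# each pair of workers needs one communication channel, so channels grow
# roughly as n^2 while headcount (and raw output) grows as n.
for n in [10, 100, 1000]:
    channels = n * (n - 1) // 2  # number of pairwise communication links
    print(f"n={n:5d}  channels={channels:8d}  channels per worker={channels / n:7.1f}")
```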
Resource scarcity seems unlikely to bite here, at least not for long. If some product is very profitable to create, and one of its components has a shortage, then people (or AGIs) will find ways to redesign around that component.
Key resources that come to mind would be electricity and chips (and the materials to produce them). I don’t know how elastic production is in these industries, but the reason I expect scarcity to be a barrier is that you’re constrained by the slowest factor. For huge transformations, or for redesigning significant parts of the current AI pipeline (like using a different kind of computation), I think there’s probably a lot of serial work that has to be done to make it work. I agree the problems are solvable, but it shifts the question from “how much demand will there be for cheap AGI” to “how fast can resources be scaled up”.
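As a toy sketch of the “slowest factor” point (the growth rates below are made up for illustration, not estimates): if output needs chips, electricity, and materials in roughly fixed proportions, feasible growth is capped by whichever input scales slowest, no matter how fast the others can expand.

```python
# Toy bottleneck model with made-up growth rates: when inputs are needed
# in fixed proportions, output growth is capped by the slowest-scaling input.
annual_capacity_growth = {
    "chips": 1.5,        # hypothetical: capacity grows 1.5x per year
    "electricity": 1.1,  # hypothetical: 1.1x per year
    "materials": 1.3,    # hypothetical: 1.3x per year
}
bottleneck = min(annual_capacity_growth, key=annual_capacity_growth.get)
print(f"Output growth capped at {annual_capacity_growth[bottleneck]}x/year by {bottleneck}.")
```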
I wasn’t complaining about economists who say “the consequences of real AGI would be [crazy stuff], but I don’t expect real AGI in [time period T / ever]”. That’s fine!
Instead, I was mainly complaining about the economists who have not even considered that real AGI is a possible thing at all. It’s just a big blind spot for them.
Yeah, I definitely agree.
And I don’t think this is independent of their economics training (although non-economists are obviously capable of having this blind spot too).
Instead, I think that (A) “such-and-such is just not a thing that happens in economies in the real world” and (B) “real AGI is even a conceivable possibility” are contradictory. And I think that economists are so steeped in (A) that they consider it to be a reductio ad absurdum for (B), whereas the correct response is the opposite ((B) disproves (A)).
I see how this could happen, but I’m not convinced this effect is actually what’s going on. As you mention, many people have this blind spot. There are people who claim AGI is already here (and evidently have a different definition of AGI). I think my crux is that this isn’t unique to economists. Most non-AI people who are worried about AI seem worried that it will take their job, not all jobs. There are some people willing to accept at face value the premise that AGI (as we define it) will exist, but it seems to me that most people outside of AI who question the premise at all end up not taking it seriously.
I had the same thought. Some of the graphs seem, at first glance, to have an inflection point at the ChatGPT release, but on closer inspection the trend looks like it started before ChatGPT. For example, these seem to show that even at the beginning, in early 2021, more-exposed jobs were growing at a slower rate than less-exposed jobs. I also agree the story could be true, but I’m not sure these graphs are strong evidence without more analysis.