I think you basically ignore the existing wisdom about what limits the size of firms, and instead try to explain those limits with a model that doesn’t tell us very much about how companies actually work.
We have antitrust laws. There’s the Innovator’s Dilemma, as described by Clayton Christensen, which explains why companies decide against entering certain businesses. Markets often outperform hierarchical decision-making: Uber could be a lot bigger if it employed all its drivers and owned all the vehicles, but it would rather not do that part of the business and instead rely on market dynamics.
Uber would be a lot bigger if it employed all the drivers directly. Managing people often adds inefficiencies. The more layers of management an organization has, the worse the incentive alignment tends to be.
If you add a bunch of junior programmers to a software project, it might very well slow the project down, because it takes effort for the more experienced programmers to manage them. GitHub Copilot, on the other hand, makes an experienced programmer more productive without adding the friction of managing junior employees.
Some technologies eventually encounter fundamental limits. The rocket equation makes it difficult to reach orbit from Earth’s gravity well; if the planet were even moderately larger, it would be nearly impossible. It’s conceivable that some sort of complexity principle makes it increasingly difficult to increase raw intelligence much beyond the human level, as the number of facts to keep in mind and the subtlety of the connections to be made increases[7].
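The rocket-equation point can be made concrete with a quick sketch. The numbers below are illustrative, not precise figures: roughly 9.4 km/s of delta-v to reach low Earth orbit including losses, and roughly 4.4 km/s exhaust velocity for a hydrogen/oxygen engine. The required wet-to-dry mass ratio grows exponentially with the delta-v a planet demands:

```python
import math

def mass_ratio(delta_v_km_s, exhaust_velocity_km_s=4.4):
    """Tsiolkovsky rocket equation, solved for the wet/dry mass ratio
    needed to achieve a given delta-v. 4.4 km/s is roughly the exhaust
    velocity of a hydrogen/oxygen engine."""
    return math.exp(delta_v_km_s / exhaust_velocity_km_s)

# ~9.4 km/s is a common figure for low Earth orbit including gravity/drag losses.
for scale in (1.0, 1.25, 1.5, 2.0):
    dv = 9.4 * scale
    print(f"delta-v {dv:5.1f} km/s -> mass ratio {mass_ratio(dv):6.1f}")
```

A planet demanding twice Earth’s delta-v squares the required mass ratio (roughly 8.5 becomes roughly 72), which is why a moderately larger gravity well makes chemical rockets nearly impractical.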
We can look at a skill that’s about applying human intelligence, like playing Go. It would have been plausible that the maximum skill level is near what professional Go players are able to accomplish. Instead, AlphaGo went very far past what humans can accomplish in a very short timeframe, and AlphaGo doesn’t even do any recursive editing of its own code.
GPU capacity will not be increasing at the same pace as the (virtual) worker population, and we will be running into a lack of superhuman training data, the generally increasing difficulty of progress, and the possibility of a complexity explosion.
AI can help with producing GPUs as well. It’s possible to direct a lot more of the world’s economic output into producing GPUs than is currently done.
Sure, it’s easy to imagine scenarios where a specific given company could be larger than it is today. But are you envisioning that if we eliminated antitrust laws and made a few other specific changes, then it would become plausible for a single company to take over the entire economy?
My thesis boils down to the simple assertion that feedback loops need not diverge indefinitely: exponential growth can resolve into an S-curve. In the case of a corporation, the technological advantages, company culture, and other factors that allow a company to thrive in one domain (e.g. Google, web search) might not serve it well in another domain (Google, social networks). In the case of AI self-improvement, it might turn out that we eventually enter a regime – for instance, the point where we’ve exhausted human-generated training data – where the cognitive effort required to push capabilities forward increases faster than the cognitive effort supplied by those same capabilities. In other words, we might reach a point where each successive generation of recursively-designed AI delivers a decreasing improvement over its predecessor. Note that I don’t claim this is guaranteed to happen; I merely argue that it is possible, but even that seems to be enough of a claim to be controversial.
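The difference between an indefinitely diverging feedback loop and one that resolves into an S-curve can be sketched numerically. This is a toy model with made-up parameters, not a forecast: it compares improvement proportional to current capability against improvement that shrinks as a difficulty ceiling K is approached.

```python
def exponential_step(c, r=0.5):
    # each generation improves in proportion to current capability
    return c + r * c

def s_curve_step(c, r=0.5, K=100.0):
    # the gain shrinks as capability approaches the difficulty ceiling K,
    # modelling required effort growing faster than effort supplied
    return c + r * c * (1 - c / K)

exp_c = s_c = 1.0
for generation in range(30):
    exp_c = exponential_step(exp_c)
    s_c = s_curve_step(s_c)

print(f"exponential: {exp_c:,.0f}")  # grows without bound
print(f"s-curve:     {s_c:,.1f}")    # flattens just below K
```

Both trajectories look identical in the early generations; only near the ceiling does the second one reveal diminishing returns, which is why early exponential growth doesn’t settle the question.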
We can look at a skill that’s about applying human intelligence, like playing Go. It would have been plausible that the maximum skill level is near what professional Go players are able to accomplish. Instead, AlphaGo went very far past what humans can accomplish in a very short timeframe, and AlphaGo doesn’t even do any recursive editing of its own code.
Certainly. I think we see that the ease with which computers can definitively surpass humans depends on the domain. For multiplying large numbers, it’s no contest at all. For Go, computers win definitively, but by a smaller margin than for multiplication. Perhaps, as we move toward more and more complex and open-ended problems, it will get harder and harder to leave humans in the dust? (Not impossible, just harder?) I discuss this briefly in a recent blog post, I’d love to hear thoughts / evidence in either direction.
AI can help with producing GPUs as well. It’s possible to direct a lot more of the world’s economic output into producing GPUs than is currently done.
Sure. I’m just suggesting that the self-improvement feedback loop would be slower here, because designing and deploying a new generation of fab equipment has a much longer cycle time than training a new model, no?
Perhaps, as we move toward more and more complex and open-ended problems, it will get harder and harder to leave humans in the dust?
A key issue with training AIs for open-ended problems is that it’s a lot harder to create good training data for open-ended problems than it is to create high-quality training data for a game with clear rules.
It’s worth noting that some of the problems where humans outperform computers right now are not really the open-ended tasks but things like folding laundry.
A key difference between playing Go well and folding laundry well is that training data is much easier to come by for Go.
If you look at the quality of the decisions a lot of professionals make when probability is involved (meaning there’s a lot of uncertainty), they are pretty bad.
Sure. I’m just suggesting that the self-improvement feedback loop would be slower here, because designing and deploying a new generation of fab equipment has a much longer cycle time than training a new model, no?
You don’t need a new generation of fab equipment to make advances in GPU design. Many of the improvements of the last few years did not come from constantly having a new generation of fab equipment.
Ah, by “producing GPUs” I thought you meant physical manufacturing. Yes, there has been rapid progress of late in getting more FLOPs per transistor for training and inference workloads, and yes, RSI will presumably have an impact here. The cycle time would still be slower than for software: an improved model can be immediately deployed to all existing GPUs, while an improved GPU design only impacts chips produced in the future.
Ah, by “producing GPUs” I thought you meant physical manufacturing.
Yes, that’s not just about new generations of fab equipment.
GPU performance for training models increased faster than Moore’s law over the last decade. It’s not an area where the improvement curve is slow even without AI.
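As a back-of-the-envelope check on how much faster-than-Moore compounding matters over a decade (the doubling times below are illustrative round numbers, not measured figures):

```python
def growth_factor(doubling_time_years, years):
    # compound growth: how many times performance multiplies over `years`
    return 2 ** (years / doubling_time_years)

# Transistor-density scaling alone, doubling roughly every 2 years:
print(growth_factor(2.0, 10))  # 32x over a decade
# If architecture, lower precision, etc. shorten the doubling time to ~18 months:
print(growth_factor(1.5, 10))  # ~100x over the same decade
```

A modest shortening of the doubling time triples the decade-scale gain, so even without AI in the loop, the design-side improvements compound substantially on top of fab progress.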