Nice work!
This is qualitatively and quantitatively similar to what I expect and AI 2027 depicts. I’m curious to get more quantitative guesses/estimates out of you. It seems like you think things will go, maybe, 2x–4x slower than AI 2027 depicts?
Also: You have this great chart:
[Chart: how many times production must double to halve the cost, by sector]
You then say:
“Overall, it looks likely that the number of robots will double 1-5 times before the robot growth rate doubles.”
I feel like Wright’s Law should probably be different for different levels of intelligence. Like, for modern humans in hardware, it takes 1–2.5 doublings of production to halve cost, and in computer chips, it takes 0.2. But I feel like for superintelligences, the # of doublings of production needed to halve cost should be lower in both domains, because they can learn faster than humans can / require fewer experiments / require less hands-on experience.
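To put units on that: if it takes d doublings of cumulative production to halve cost, Wright’s Law says cost scales as production^(-1/d). A minimal sketch, using the hardware and chip figures above; every other number is made up purely for illustration:

```python
def wrights_law_cost(cumulative_units, initial_units, initial_cost, doublings_to_halve):
    """Wright's Law: if it takes d doublings of cumulative production to halve
    cost, each doubling multiplies cost by 2**(-1/d), i.e. cost ~ production**(-1/d)."""
    return initial_cost * (cumulative_units / initial_units) ** (-1.0 / doublings_to_halve)

# Purely illustrative: a $100k robot at 100k cumulative units, then 10x more production.
for d in (2.5, 1.0, 0.2):  # doublings needed to halve cost (hardware-ish vs. chip-like)
    c = wrights_law_cost(1e6, 1e5, 100_000, d)
    print(f"d={d}: after 10x more cumulative production, cost ~ ${c:,.0f}")
```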
Thanks!
Yep I’d be excited to hash this out and get quantitative. Doesn’t seem like we’re too far apart.
This is a great Q.
My intuition would have been that superintelligence learns (say) 10X more per experiment. And 10X more per unit produced. If that’s right, Wright’s Law won’t change. You’ll just get a one-time effect when humans are replaced by superintelligence. That one-time effect will mean that when you double production, you actually 20X the “effective production”. So you’ll get a sudden reduction in your doubling times, and then go back to the Wright’s Law pattern that we’re forecasting.
Or, to put it another way, let’s assume that with human intelligence you’d get 1-month doubling times when you have 100 billion robots. Then with superintelligence you’ll get it with just 10 billion robots. Because you learn 10X as much per robot produced.
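A minimal sketch of that one-time shift, with made-up curve parameters (the only number carried over is the 10X learning-per-unit): treat each robot as counting for some multiplier’s worth of “effective production”, and the curve just shifts left; its slope doesn’t change.

```python
def robot_cost(cumulative_robots, learning_multiplier=1.0, initial_units=1e6,
               initial_cost=100_000.0, doublings_to_halve=1.5):
    """Same Wright's Law curve, but each physical robot counts as
    `learning_multiplier` units of effective production (illustrative numbers)."""
    effective = cumulative_robots * learning_multiplier
    return initial_cost * (effective / initial_units) ** (-1.0 / doublings_to_halve)

# Human-level learning at 100B robots vs. 10X-per-unit learning at 10B robots:
# the costs come out the same, i.e. a one-time leftward shift, not a new slope.
print(f"${robot_cost(100e9, learning_multiplier=1.0):,.0f}")   # human-level learning, 100B robots
print(f"${robot_cost(10e9, learning_multiplier=10.0):,.0f}")   # 10X learning per robot, 10B robots
```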
Interesting, plausible.
Would you say this goes in the other direction too? If a bunch of mediocre 10-year-olds were producing robots, perhaps because their wealthy parents were forcing them to & funding them, would you model it as a one-time 10x penalty where they need to produce 10x as many robots to get to the same price point, but after that they’d be on the same curve, and after they got to e.g. 1M/yr production their robots would be just as good and just as cheap as Tesla’s when Tesla is producing 100k/yr?
I think my main objection is that it just really seems like skill/intelligence should make a difference here. Like, prediction: I bet that if we had data on all car companies, we’d find that the slope of Wright’s Law is somewhat different from company to company… Claude seems to agree: https://claude.ai/share/5fe19152-0958-4b6e-8f09-0989aa4c75bc
Hmm, the 10-year-olds thought experiment is interesting.
I think they might just plateau at a much earlier point entirely? I.e. they just can’t make functioning robots at all, or can’t bring them below a hugely expensive price, and then they stop learning from experience?
So the translation might be that we’d expect the experience curve to hit a plateau with human intelligence but to keep going to a higher plateau with superintelligence?
“I bet that if we had data on all car companies, we’d find that the slope of Wright’s Law is somewhat different from company to company.”
Agreed. Over a few OOMs, that could be a temporarily different slope due to different starting levels of technology and/or a different “amount learned per unit produced”, but still the slope of the curves would become the same if you kept going for multiple OOMs. I.e. my explanation is entirely compatible with some companies trouncing others.
It seems like your assumption has some kinda wild consequences. If the slope is different then, as you go farther out on the curve, the ratio “amount learned by superintelligence per unit produced”/“amount learned by humans per unit produced” becomes increasingly extreme. Starts off at 10X, but ends up >1000X. But why would we expect this ratio to increase?
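Rough arithmetic behind that, with made-up parameters: if humans need (say) 1.5 doublings to halve cost and superintelligence needs 1.0, the gap in cumulative production required to reach a given cost keeps widening as you push further down the curve, rather than staying at a fixed 10X.

```python
import math

def units_to_reach(target_cost, doublings_to_halve, initial_cost=100_000.0, initial_units=1e6):
    """Cumulative units at which a Wright's Law curve reaches `target_cost`
    (illustrative starting point: $100k at 1M cumulative units)."""
    halvings = math.log2(initial_cost / target_cost)
    return initial_units * 2 ** (halvings * doublings_to_halve)

# With different slopes, the human/ASI gap grows without bound as cost falls.
for target in (50_000, 10_000, 1_000, 100):
    human = units_to_reach(target, doublings_to_halve=1.5)
    asi = units_to_reach(target, doublings_to_halve=1.0)
    print(f"to reach ${target:>6,}: human production / ASI production ~ {human / asi:,.1f}x")
```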
What’s this about hitting plateaus though? Do experience curves hit plateaus?
Re: the ratio becoming extreme: You say this is implausible, but it’s exactly what happens when you hit a plateau! When you hit a plateau, that means that even as you stack on more OOMs of production, you can’t bring the price below a certain level.
Another argument that extreme ratios aren’t implausible: it’s what happens whenever engineers get something right on the first try, or close to the first try, that dumber people or processes could have gotten right eventually through trial and error. Possible examples: (1) Modern scientists making a new food product detect a toxic chemical in it and add an additional step to the cooking process to eliminate it. In ancient times, native cultures would have stumbled across a similar solution after thousands of years of cultural selection. (2) Modern engineers build a working rope bridge over a chasm, able to carry the desired weight (100 men?), on the first try, since they have first-principles physics and precise measurements. Historically, ancient cultures would have been able to build this bridge too, but only after ~a thousand earlier failed attempts that either broke or consumed too much rope (i.e. been too expensive).
(For hundreds of thousands of years, the ‘price’ of firewood was probably about the same, despite production going up by OOMs, until the industrial revolution and mechanized logging)
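One way to picture a plateau, with made-up numbers: the ordinary experience curve, except only part of the cost is learnable, and it decays toward a floor that more production alone (at a given intelligence level) can’t get below; pre-industrial firewood would be the floor-dominated case.

```python
def cost_with_floor(cumulative_units, initial_units=1e6, initial_cost=100_000.0,
                    doublings_to_halve=1.5, cost_floor=5_000.0):
    """Wright's Law applied to the learnable part of the cost only; the rest is
    a floor that experience alone can't push below (all numbers illustrative)."""
    learned = (cumulative_units / initial_units) ** (-1.0 / doublings_to_halve)
    return cost_floor + (initial_cost - cost_floor) * learned

for units in (1e6, 1e8, 1e10, 1e12):
    print(f"{units:.0e} units -> ~${cost_with_floor(units):,.0f}")
```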
Thanks—great points.
I’d guess that experience curves do hit plateaus as you approach the limits of what’s possible with the current level of technology. Then you need R&D to get onto the next s-curve. If we’re combining experience curves with R&D into entirely new approaches, then I’d expect they only approach a plateau when we approach ultimate tech limits, or perhaps the ultimate limits of what humans are smart enough to ever design (like with the 10-year-olds).
Agree the ratio can become extreme if humans hit a plateau but superintelligence doesn’t. But this looks like the same experience curve continuing for AIs and hitting a ceiling for humans. Whereas I thought you expected the experience curve for humans to keep going and the one for AIs to keep going at a permanently steeper rate.
I suppose if the reason for experience curves is that humans get exponentially less productive at improving tech as it becomes more complex, then maybe this exponential decay won’t apply as much to superintelligence, and they could have a curve with a better slope… I think the normal understanding is that experience curves happen more because it takes exponentially more work to improve the tech as it becomes more complex, but this does seem plausible.
I like your examples about modern science vs historical trial and error. Feels like a case of massive meta-learning. Humans (through a lot of trial and error) learnt the scientific method. Then that method is way more sample-efficient. Similarly, perhaps superintelligence will learn (from other areas) new ways of structuring tech development with similar gains. Then they could have massive ratios over humans, like 1:10,000. That either manifests as a truly massive one-time gain (before going back to the same exp curve as humans!), or perhaps it comes in gradually and looks more like a permanently steeper exp curve.
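A sketch of that last fork, with made-up numbers: if a (say) 10,000X learning advantage phases in gradually rather than arriving all at once, the curve looks much steeper than the underlying slope while it phases in, then reverts to the ordinary Wright’s Law slope afterwards.

```python
import math

def effective_multiplier(cumulative_units, full_gain=10_000.0, ramp_start=1e6, ramp_end=1e9):
    """A 'one-time' learning gain that phases in gradually: the multiplier on
    effective production rises log-linearly from 1x to full_gain between
    ramp_start and ramp_end, then stays flat."""
    if cumulative_units <= ramp_start:
        return 1.0
    if cumulative_units >= ramp_end:
        return full_gain
    frac = math.log(cumulative_units / ramp_start) / math.log(ramp_end / ramp_start)
    return full_gain ** frac

def robot_cost(cumulative_units, initial_units=1e6, initial_cost=100_000.0, doublings_to_halve=1.5):
    effective = cumulative_units * effective_multiplier(cumulative_units)
    return initial_cost * (effective / initial_units) ** (-1.0 / doublings_to_halve)

# Cost falls ~36x per 10x of production while the gain phases in, ~4.6x per 10x afterwards.
for units in (1e6, 1e7, 1e8, 1e9, 1e10):
    print(f"{units:.0e} robots -> ~${robot_cost(units):,.2f}")
```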
Cool. So, I feel pretty confident that via some combination of different-slope experience curves and multiple one-time gains, ASI will be able to make the industrial explosion go significantly faster than… well, how fast do you think it’ll go exactly? Your headline graph doesn’t have labels on the x-axis. It just says “Time.” Wanna try adding date labels?