Human cognition is fundamentally limited by biological drives, tiredness, boredom, limited working memory, and low precision. Humans can’t recursively improve their own minds and so our exponential growth rate is constant. An AI’s improvement rate will not be constant, so I think it is unreasonable to estimate an AI’s rate of exponential growth from how long it takes human researchers to develop an AI of equivalent ability.
For instance, suppose that in 2080 we develop an AI capable of designing itself from scratch in exactly 80 years. However, the AI does not have to recreate itself from scratch, and presumably it does not need to wait 80 years to improve itself. For example, let’s assume that the AI can upgrade itself once per year, that the effects are cumulative, and that it can direct its entire output into improving itself. This means that after 1 year of 1.25% growth in capability it is fundamentally more capable at improving itself (101.25% as capable, in fact). Assuming that the growth rate is directly proportional to its current capability in the next year instead of 1.25% growth it will experience 2.5% growth. The year after that, 6%. The AI would double in capability in 5 years. In practice, hardware development will be a hard limit on how rapidly an AI can improve itself, so 5 years may be a stretch.
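The arithmetic here is ambiguous between two readings of "growth rate proportional to capability". A small simulation (purely illustrative; only the 1.25% starting rate is taken from the example above) makes the difference concrete:

```python
def simulate(years, rate0=0.0125, rule="proportional"):
    """Compound self-improvement under two hypothetical growth rules.

    "proportional": the annual growth rate stays proportional to current
        capability (rate = rate0 * capability), read literally.
    "doubling": the growth rate itself doubles each year, which is what
        the 1.25% -> 2.5% -> ~6% sequence seems to intend.
    """
    capability, rate = 1.0, rate0
    for _ in range(years):
        capability *= 1.0 + rate
        rate = rate0 * capability if rule == "proportional" else rate * 2.0
    return capability

proportional = simulate(5, rule="proportional")  # barely above (1.0125)**5
doubling = simulate(5, rule="doubling")          # faster, but still short of 2x
```

Under the literal proportional reading, five years of growth leaves capability only about 6.6% higher; even with the rate doubling every year it reaches roughly 1.44x, not 2x, after five years. That tension is what the replies below pick at.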
If we factor in Moore’s Law, we could talk about an AI that reaches a point in 2080 such that, even with Moore’s Law, it will take another 80 years to reproduce itself from scratch; that is, nearly half of the entire workload would be done in 2158 and 2159. The growth of such an AI would be much slower at first, because it would have exponentially fewer resources available in 2080 than in 2160 (and in fact by 2161 it would be capable of doubling every year). Such an AI would have to be very weak compared to human researchers to require so many 18-month doublings of computing power, so I don’t think it’s very meaningful to scale the problem this way. After all, the AI research field has not been doubling in capability every 18 months since the 1950s. So it makes more sense to talk about an AI in 2080 that, if run on the hardware of 2080, would take another 80 years to develop itself from scratch. I am fairly confident that allowing it to self-improve on improving hardware would lead to hard takeoff within a period of a few years.
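The "nearly half the workload in the last two years" claim is just a property of the geometric series. A quick check, assuming the effective work rate doubles every 18 months (the numbers are the hypothetical ones from the example above):

```python
def late_work_fraction(total_years, final_years, doubling_months=18):
    """Fraction of total work done in the final `final_years` of a task,
    when the work rate grows as 2**(t / T) with T the doubling time in
    years. The constant factor T / ln(2) from the integral cancels."""
    T = doubling_months / 12.0
    total = 2.0 ** (total_years / T) - 1.0
    late = 2.0 ** (total_years / T) - 2.0 ** ((total_years - final_years) / T)
    return late / total

fraction = late_work_fraction(80, 2)  # share of an 80-year task done in its last 2 years
```

It comes out to roughly 60%, so "nearly half" is, if anything, an understatement: each 18-month doubling period does about as much work as all the time before it combined.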
Humans can’t recursively improve their own minds and so our exponential growth rate is constant.
Not all of the human thought process goes on inside the head. An engineer with a computer is far more productive in terms of designs generated than one with a pad of paper (and in turn more productive than one without any tools whatsoever).
We’ve merely gotten all of the obvious low-hanging recursive improvements. From exporting calculations out of our heads (abacus, paper and pencil, slide rule, computer) to better organizational systems, we’ve improved our ability to turn our thoughts into useful work.
Not all of the human thought process goes on inside the head. An engineer with a computer is far more productive in terms of designs generated than one with a pad of paper (and in turn more productive than one without any tools whatsoever).
You are right, and it’s interesting to consider this quote from the article in that light:
At some point our AGI will be just as smart as the world’s AI researchers, but we can hardly expect to start seeing super-fast AI progress at that point, because the world’s AI researchers haven’t produced super-fast AI progress.
What would a group of human AI researchers capable of completely reimplementing a copy of themselves be able to do? I’m assuming, for the sake of the example, that if an AGI could do it, so could the human researchers it is on par with. That’s actually a tremendous amount of power for either an AGI or a group of humans. As it is, we’ve been lucky to discover modern medicine and farming techniques and to find fossil fuels just to boost the total population and draw the tiny percentage of scientists and engineers from it. We won’t be able to double the number of high-quality AI researchers every 50 years for long on this rock without an actual improvement in the rate of growth of AI research. The point where any system acquires the ability to be self-sustaining seems like it would have to be an inflection point of greatly increased growth.
This means that after 1 year of 1.25% growth in capability it is fundamentally more capable at improving itself (101.25% as capable, in fact). Assuming that the growth rate is directly proportional to its current capability in the next year instead of 1.25% growth it will experience 2.5% growth.
I’m confused. If “growth rate is directly proportional to current capability”, then why would you ever stop having 1.25% growth? You’d just be seeing 1.25% of an increasingly larger number.
I’m confused. If “growth rate is directly proportional to current capability”, then why would you ever stop having 1.25% growth? You’d just be seeing 1.25% of an increasingly larger number.
You’re right, I stated that incorrectly. In my example the growth rate and the capability were both increasing, on the justification that an improvement in the ability to improve itself would lead to an increasing growth rate over time. For instance, if each (u, d) pair of improvement and difficulty is ordered favorably (luckily?), then solving enough initial problems is likely to decrease the difficulty of future improvements and so increase the growth rate. Instead of diminishing returns as the low-hanging fruit is picked, the low-hanging fruit will turn previously hard problems into new low-hanging fruit.
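One way to make the (u, d) intuition concrete is a toy model (entirely hypothetical, with made-up numbers) where each solved improvement shrinks the difficulty of everything that remains:

```python
def capability_per_effort(pairs, ease_factor=1.0):
    """Process (utility, difficulty) pairs in order.

    ease_factor < 1 models early wins making later problems cheaper;
    ease_factor = 1 models fixed difficulties, i.e. no compounding.
    Returns capability gained per unit of effort spent.
    """
    capability, effort, ease = 0.0, 0.0, 1.0
    for utility, difficulty in pairs:
        effort += difficulty * ease
        capability += utility
        ease *= ease_factor  # each solved problem discounts the rest
    return capability / effort

pairs = [(1.0, 1.0)] * 10
baseline = capability_per_effort(pairs)        # fixed difficulties
with_wins = capability_per_effort(pairs, 0.9)  # early wins compound
```

With a 0.9 discount, the same ten wins cost about a third less total effort; whether real research looks more like ease_factor < 1 or like steadily rising difficulties is exactly the open question.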
Right, there are two competing forces here… diminishing returns, and the fact that early wins may help with later wins. I don’t think it’s obvious that one predominates.
If we find another big improvement, it will seem obvious in retrospect.