I feel really weird about this article, because I at least spiritually agree with the conclusion. My lower-bound AGI extinction scenario is “automated industry grows 40%/y, at first everybody becomes filthy rich, and within the next decade the necrosphere devours everything”, and this scenario seems weirdly underdiscussed, relative to “AGI develops self-sustaining supertech and kills everyone within a month”, by both skeptics and doomers.
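For a sense of scale, here is what that growth rate compounds to (a toy calculation; the 40%/y figure is just my scenario's assumption, not a forecast):

```python
# Toy illustration: sustained 40%/y growth of the automated sector, compounded.
growth_rate = 0.40  # assumed annual growth rate from my scenario above

for years in (5, 10, 15):
    multiple = (1 + growth_rate) ** years
    print(f"after {years:2d} years: {multiple:6.1f}x initial size")

# after  5 years:    5.4x
# after 10 years:   28.9x
# after 15 years:  155.6x
```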
That said, I disagree with almost everything else.
You underestimate the effect of R&D automation because you seem to ignore that a lot of the bottlenecks in R&D, and in the adoption of its results, are actually human bottlenecks. Even if we automate everything in R&D that is not pure scientific creativity, we are still bottlenecked by the facts that humans can hold only around ten items in working memory, that multitasking basically doesn’t exist, that you can perform at most several hours of hard intellectual work per day, that you need 28 years to grow a scientist, that only so many people are actually capable of producing science, that science progresses one funeral at a time, et cetera, et cetera. The same goes for adoption: even once an innovation exists, only so many people are capable of understanding it and implementing it while integrating it into an existing workflow in a maintainable way, and they have only so many productive working hours per day.
Even if we dial every other productivity factor up to infinity, there is still a hard cap on productivity growth imposed by inherent human limitations and population size. When we achieve AI automation, we swap those limitations for whatever is achievable within physical limits.
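One way to formalize the cap (my framing, not the article's) is an Amdahl's-law-style bound: if a fraction f of R&D throughput stays human-bottlenecked, then no matter how fast the automated remainder runs, total speedup can never exceed 1/f. A minimal sketch:

```python
# Amdahl's-law-style bound on R&D speedup (illustrative framing; the fractions
# below are made-up, not measured).
# f = fraction of the work that stays human-bottlenecked and cannot be sped up.

def total_speedup(f: float, automation_speedup: float) -> float:
    """Overall speedup when the (1 - f) automatable part runs `automation_speedup`x faster."""
    return 1 / (f + (1 - f) / automation_speedup)

for f in (0.5, 0.2, 0.05):
    print(f"human-bottlenecked fraction {f:.0%}: "
          f"10x automation -> {total_speedup(f, 10):.1f}x total, "
          f"hard cap -> {1 / f:.0f}x")

# human-bottlenecked fraction 50%: 10x automation -> 1.8x total, hard cap -> 2x
# human-bottlenecked fraction 20%: 10x automation -> 3.6x total, hard cap -> 5x
# human-bottlenecked fraction 5%:  10x automation -> 6.9x total, hard cap -> 20x
```

Removing the human bottleneck entirely is what sends f toward zero and lifts the cap to whatever physics allows.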
Your assumption that “everybody thinks R&D is going to be automated first because R&D is mostly abstract reasoning” is wrong; the actual reason R&D is going to be automated relatively early is that it’s very valuable, especially once you count opportunity costs. Every time you pay an AI researcher to automate a narrow task, you waste money that could instead be spent on creating an army of 100k artificial researchers who could automate 100k tasks. I think this holds even if you assume pretty long timelines, because, given the human bottlenecks mentioned above, everything you can automate before R&D automation is going to be automated rather poorly (and, unlike in science, you can’t run self-improvement from there).
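To make the opportunity-cost point concrete, a toy comparison; every number here is a placeholder assumption except the 100k figure from my argument above:

```python
# Toy opportunity-cost comparison; all quantities are placeholder assumptions.
budget = 100               # researcher-years you can pay for, arbitrary units
tasks_per_human_year = 1   # assume a human AI researcher automates one narrow task/year

# Option A: spend the budget on automating narrow tasks directly.
narrow_tasks = budget * tasks_per_human_year                 # 100 tasks

# Option B: spend the same budget on automating research itself, which
# (by the assumption in my argument) yields 100k artificial researchers,
# each of whom can then automate a task.
artificial_researchers = 100_000
tasks_via_rnd_automation = artificial_researchers            # 100,000 tasks

print(f"Option A (narrow automation first): {narrow_tasks} tasks")
print(f"Option B (R&D automation first):    {tasks_via_rnd_automation} tasks")
```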