Suppose that different tasks require different levels of AI capability before AI does them better than humans.
First AI can do arithmetic, then play chess, then drive cars, etc. Let's also assume that AI is much faster than humans. So imagine that AI research ability rises from almost nothing to superhuman over the course of a year. A few months in, it's inventing things like linear regression: impressive, but not as good as current human work on AI. There are a few months where the AI is worse than a serious team of top researchers, but better than an intern. So if you have a niche use for AI, that use can be automatically automated: the AI-research AI designs a widget-building AI. The humans could have made a widget-building AI themselves, but so few widgets are produced that it wasn't worth the effort.
Then the AI becomes as good as a top human research team and FOOM. How crazy the world gets before FOOM depends on how much other stuff is automated first. Is it easier to make an AI teacher, or an AI AI-researcher? Also remember that bureaucratic delays are a thing: there is a difference between having an AI that does medical diagnosis in a lab and having it used in every hospital.
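The capability-ramp picture above can be sketched as a toy model. Everything here is an invented illustration, not a real estimate: capability rises linearly from zero to superhuman over twelve months, and each task is automated the month capability crosses that task's (made-up) difficulty threshold.

```python
# Toy model with hypothetical numbers: AI research capability ramps
# linearly from 0 toward superhuman (1.0) over 12 months; a task is
# automated the first month capability meets its difficulty threshold.

def month_automated(threshold, months=12):
    """Return the first month at which capability >= threshold, or None."""
    for month in range(1, months + 1):
        capability = month / months  # linear ramp from ~0 up to 1.0
        if capability >= threshold:
            return month
    return None

# Illustrative thresholds only -- chosen to echo the ordering in the text.
tasks = {
    "arithmetic": 0.05,
    "chess": 0.3,
    "driving": 0.6,
    "niche widget design": 0.7,
    "top-team AI research": 0.95,
}

for task, threshold in sorted(tasks.items(), key=lambda kv: kv[1]):
    print(f"{task}: automated in month {month_automated(threshold)}")
```

The point of the sketch is just the ordering: low-threshold tasks (arithmetic, chess) fall early, niche applications fall mid-ramp while the AI is still intern-grade at research, and top-team-level research falls last, right before FOOM.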
That’s a good point; if a research group develops a more promising approach to AI, recursive self-improvement / capability enhancement might be one of the first things they try, before going for immediate money-making applications: the programmers already know that application area, and they can do it internally without going through the rigmarole of marketing, product design, and so on.
This is something I mentioned in the last section—if there is a significant lead time (on the order of years), then it is still totally possible for a superintelligence to appear out of nowhere and surprise everyone, even given the continuous progress model. The difference is that with discontinuous progress that outcome is essentially guaranteed, so discontinuities are informative because they give us good evidence about what takeoff speeds are possible.
Like you say, if there are no strong discontinuities we might expect lots of companies to start working hard on AIs with capability enhancement / recursive improvement. But the first AI with anything like those abilities will be the one made the quickest, so it likely isn’t very good at self-improvement and gets poor returns on optimization, and the next one that comes out is a little better, making progress continuous even with rapid capability gain. (I didn’t discuss the notion of Recalcitrance in Bostrom’s work, but we could model this setup as each new self-improving AI design having a shallower and shallower Recalcitrance curve.) If that’s not going to happen, it will be either because one project goes quiet while it gets a few steps ahead of the competition, or because there is a threshold below which improvements ‘fizzle out’ and don’t generate returns, but adding one extra component takes you over that threshold and returns on investment explode. That takes you to the conceptual question of whether intelligence has such a threshold built in.
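The Recalcitrance setup can be made concrete with Bostrom's rate-of-improvement relation, rate of change in intelligence = optimization power / recalcitrance (dI/dt = O/R). The sketch below uses invented parameter values: each successive, independently built design starts slightly more capable and faces a shallower recalcitrance curve, so each run gets better returns on optimization than the last, yet progress across the sequence of designs stays continuous.

```python
# Sketch of Bostrom's dI/dt = O / R, with O = I (the system applies its
# own intelligence to improving itself) and recalcitrance R growing
# linearly in I. All parameter values here are invented for illustration.

def self_improve(i0, recalcitrance_slope, steps=50, dt=0.1):
    """Crude Euler integration of dI/dt = O / R from starting level i0."""
    intelligence = i0
    for _ in range(steps):
        optimization_power = intelligence
        recalcitrance = 1.0 + recalcitrance_slope * intelligence
        intelligence += dt * optimization_power / recalcitrance
    return intelligence

# Successive designs: each starts a bit ahead of the last and has a
# shallower recalcitrance curve (smaller slope = easier to improve).
designs = [(1.0, 4.0), (1.2, 2.0), (1.4, 1.0)]
for start, slope in designs:
    final = self_improve(start, slope)
    print(f"start={start}, recalcitrance slope={slope} -> final={final:.2f}")
```

Each design improves on its predecessor, but no single run jumps discontinuously past the rest; the "explosive" case would correspond to recalcitrance flattening out (or falling) past some intelligence threshold, which is exactly the conceptual question at issue.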