Let’s assume AGI that’s on par with the world AI research community is reached in 2080 (LW’s median “singularity” estimate in 2011). We’ll pretend AI research has only been going on since 2000, meaning 80 “standard research years” of progress have gone into the AGI’s software. So at the moment our shiny new AGI is fired up, u = 80, and it’s doing research at the rate of one “human AGI community research year” per year, so du/dt = 1. That’s an effective rate of return on AI software progress of 1/80 ≈ 1.3%, giving a software quality doubling time of around 58 years.
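For concreteness, here is a minimal sketch of the model this paragraph implies: accumulated quality u compounds at the rate (du/dt)/u, so the doubling time is ln(2) divided by that rate. Continuous compounding gives roughly 55 years; the rule of 72 gives the ~58 quoted above. The specific dates and the “80 research years” figure are the scenario’s assumptions, not established facts.

```python
# Sketch of the implied exponential model: du/dt = u / T, with T = u / (du/dt).
import math

u = 80.0      # accumulated "standard research years" at AGI launch (assumed)
du_dt = 1.0   # one human-community research year of progress per year (assumed)

rate_of_return = du_dt / u                     # 1/80 = 1.25% per year
doubling_time = math.log(2) / rate_of_return   # ~55 years of continuous compounding

print(f"rate of return: {rate_of_return:.2%}")
print(f"doubling time:  {doubling_time:.0f} years")
```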
The ancient Greeks discovered a lot of the maths and logic that is the distant precursor of what is needed to build an AI. Drawing the starting point at the year 2000 is arbitrary; it might roughly correspond to when research papers started using the word “AI” in the title, but that has little to do with anything. If we assume that AI output is linear in research, with some arbitrary date chosen as zero, then the implied rate of progress depends entirely on the choice of zero. Many computer technologies have shown dramatic progress in far less than 58 years. Often doing things well is only slightly harder than doing them at all, and often the first piece of software that can do a task at all (e.g. image recognition to a particular accuracy) can already do it far faster than a human.
If we consider an AI researcher trying to make improvements to a piece of AI code, there could be only a small difference in quality between the regime where almost all changes make the system worse and the regime where almost all changes are improvements. And such a system could be very fast on human timescales.
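To make the “choice of zero” objection concrete, here is a small sketch showing how the implied rate of return and doubling time at the same 2080 launch swing with where the clock is started. The candidate start dates are illustrative assumptions, not claims about when AI research really began.

```python
# How the implied rate of return depends on the arbitrary "year zero".
import math

AGI_YEAR = 2080   # the scenario's assumed arrival date
DU_DT = 1.0       # one community-research-year of progress per year at launch

# Illustrative zeros: ancient Greece, the 1956 Dartmouth workshop,
# the post's choice of 2000, and a deliberately late start.
for start_year in (-300, 1956, 2000, 2060):
    u = AGI_YEAR - start_year                 # "standard research years" accumulated by 2080
    rate = DU_DT / u
    doubling = math.log(2) / rate
    print(f"zero at {start_year:>5}: u = {u:>4} yr, return = {rate:.2%}, doubling ~ {doubling:.0f} yr")
```

Under this toy calculation, starting the clock in antiquity makes the AGI look like it compounds over millennia, while starting it a couple of decades before launch makes it double in under fifteen years, which is the sense in which the 58-year figure is an artifact of the chosen zero.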