One way to imagine what might be going on inside an AI is that it’s essentially running a bunch of algorithms. One important class of insights is coming up with new algorithms that do the same job with lower complexity. Small improvements in complexity can lead to big improvements in performance if the problem instances are big enough. (On the other hand, the AI might be bottlenecked by the speed of its slowest algorithm.) The history of computer science may give data on how much of an improvement in complexity you get for a certain amount of effort.
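To make the scale-dependence concrete, here is a minimal sketch (not from the original text, and assuming equal constant factors for both algorithms) comparing the rough operation counts of an O(n²) algorithm against an O(n log n) replacement as the input grows:

```python
import math

def speedup(n: int) -> float:
    """Ratio of n^2 work to n*log2(n) work at input size n,
    assuming both algorithms have the same constant factor."""
    return (n * n) / (n * math.log2(n))

# The same complexity improvement matters far more on large instances.
for n in (10**3, 10**6, 10**9):
    print(f"n = {n:>10}: ~{speedup(n):,.0f}x fewer operations")
```

At n = 1,000 the switch saves roughly a factor of a hundred; at n = 10⁹ it saves tens of millions, which is the sense in which a single complexity insight can dominate when problem instances are big enough.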
I’m not sure this kind of self-improvement is sufficient for FOOM, though: the AI might also try entirely new algorithms and approaches to problems. I don’t have much of a feeling for how important that would be or how often it would happen (and it would be pretty difficult to analyze the history of human science/tech/economics for those kinds of events).