I’m mostly going to answer assuming that there’s not some incredibly different paradigm (i.e. something as different from ML as ML is from expert systems). I do think the probability of “incredibly different paradigm” is low.
I’m also going to answer about the textbook at, idk, the point at which GDP doubles every 8 years. (To avoid talking about the post-Singularity textbook that explains how to build a superintelligence with clearly understood “intelligence algorithms” that can run easily on one of today’s laptops, which I know very little about.)
I think I roughly agree with Paul if you are talking about the textbook that tells us how to build the best systems for the tasks that we want to do. (Analogy: today’s textbook for self-driving cars.) That being said, I think that much of the improvement over time will be driven by improvements specifically in ML. (Analogy: today’s textbook for deep learning.) So we can talk about that textbook as well.
It’s a textbook that’s entirely about “finding good programs through a large, efficient search with a stringent goal”, which we currently call ML. The content may be primarily some new approach for achieving this, with neural nets being a historical footnote, or it might be entirely about neural nets (though presumably with new architectures or other changes from today). Logical induction doesn’t appear in the textbook.
Jeez, who knows. If I intuitively query my brain here, it mostly doesn’t have an answer; a thousand vs. million vs. billion years don’t really change my intuitive predictions about what I’d get done. So we can instead back it out from other estimates. Given timelines of 10^1–10^2 years, and, idk, ~10^6 humans working on the problem near the end, it seems like I’m implicitly predicting ~10^7 human-years of effort in our actual world. Then you have to adjust for a ton of factors, e.g. my quality relative to the average, the importance of serial thinking time, the benefit that real-world humans get from AI products that I won’t get, the difficulty of exploration in thought-space by 1 person vs. 10^6 people, etc. Maybe I end up at ~10^5 years as a median estimate with wide uncertainty (especially on the right tail).
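The Fermi arithmetic above can be sketched in a few lines. Note the net 10^-2 adjustment factor is not stated explicitly in the text; it is just the factor implied by moving from ~10^7 human-years to a ~10^5-year solo median:

```python
# Back-of-envelope version of the estimate above; every input is a rough
# order-of-magnitude guess from the text, not data.
timeline_years = 10        # low end of the 10^1 - 10^2 year timeline range
workers = 10**6            # ~a million humans on the problem near the end

# Implicit total effort in the actual world.
world_effort = timeline_years * workers     # ~10^7 human-years

# The listed adjustments (quality vs. average, serial thinking time, no AI
# tooling, solo vs. parallel exploration) net out to roughly 10^-2 here,
# since that is what the 10^7 -> 10^5 move implies.
net_adjustment = 10**-2
solo_median = world_effort * net_adjustment  # ~10^5 years
```

The point of writing it out is only that the final number is dominated by the adjustment factors, each of which could easily be off by an order of magnitude, which is why the right tail is so wide.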
Jeez, who knows. Probably chapters / sections on how to define search spaces of programs (currently, “architectures”), efficient search algorithms within those spaces (currently, “gradient descent” and “loss functions”), how to set a stringent goal (currently, “what dataset to use”).