I think we have radically different ideas of what “moderately smarter” means, and also whether just “smarter” is the only thing that matters.
I’m moderately confident that “as smart as the smartest humans, and substantially faster” would be quite adequate to start a self-improvement chain resulting in AI that is both faster and smarter.
Even the top-human smarts and speed would be enough, if it could be instantiated many times.
I also expect humans to produce AGI that is smarter than us by more than GPT-4 is smarter than GPT-3, quite soon after the first AGI that is "merely" as smart as us. I think the difference between GPT-3 and GPT-4 is amplified in human perception by how close they are to human intelligence. In my expectation, neither is anywhere near what existing hardware is capable of, let alone what future hardware might support.
The question is not whether superintelligence is possible, or whether recursive self-improvement can get us there. The question is whether widespread automation will have already transformed the world before the first superintelligence. See point 4.