But in most cases I expect that the bottleneck is being able to perform a task *at all*; if they can, then they’ll almost always be able to do it with a negligible proportion of the world’s compute.
I like your framing, and particularly like this piece of it. The thing that I’ve been trying to convince people of after doing a deep-dive research project over several months on this is… GPT-4 is close to the threshold of being able to do recursive self-improvement. I think that GPT-5 will be over that threshold. If not, then I’m nearly certain a GPT-6 would be. And I think that this threshold is critical not in a FOOM-within-days way, but in a human-assisted gradually-accelerating-self-improvement-over-months way, culminating in something roughly like a 100x improvement over 6–18 months, and then it’s a crazy singularity world and I don’t know how to predict it will go other than ‘uhoh’.
If I’m right that we’re like 1–5 years away from this crazy RSI process getting started, then it would sure be nice if humanity would coordinate a bit better about how to deal with this scenario.