Oh, when someone is trying to talk about that many orders of magnitude, they are just doing “vibe marketing” :-) In reality, we simply can’t extrapolate that far. It’s quite possible, but we can’t really know...
But no, it’s not human-level AI; AI capability is what changes fastest in this scenario. The actual reason it might go that far (and even further) is that a human-level AI is supposed to rapidly become superhuman (if it stayed at human level, what would all that extra AI research even be doing?), then even more superhuman, and so on; and if there is saturation at some point, it is usually assumed to lie very far above the human level.
If a lot of AI research is done by artificial AI researchers, one would have to impose some very strong artificial constraints to prevent that research from improving the artificial AI researchers themselves. The classical self-improvement scenario is that artificial AI researchers building much better and much stronger artificial AI researchers becomes the key focus of AI research, and that this step iterates again and again, each generation building the next (see the sketch below).
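Just to make the shape of that loop concrete, here is a minimal toy sketch in Python; the gain factor, saturation ceiling, and generation count are entirely made-up assumptions, not claims about real dynamics:

```python
# Toy model of the recursive self-improvement loop (purely illustrative;
# the gain factor, ceiling, and generation count are arbitrary assumptions).
def takeoff(capability=1.0, gain=1.5, ceiling=1e6, generations=40):
    """Each generation of AI researchers builds the next, multiplying
    capability by roughly `gain` until growth saturates near `ceiling`."""
    trajectory = [capability]
    for _ in range(generations):
        # Logistic-style damping: the improvement per generation shrinks
        # as capability approaches the assumed ceiling.
        capability *= 1 + (gain - 1) * (1 - capability / ceiling)
        trajectory.append(capability)
    return trajectory

# Human level is normalized to 1.0; the assumed ceiling sits far above it.
print(f"final capability: {takeoff()[-1]:.0f}x human level")
```

The point is only the qualitative shape: roughly exponential growth while far from any ceiling, with saturation (if it happens at all) assumed to occur well above the starting, human, level.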
Logically, I agree. Intuitively, I suspect that it just won’t happen. But intuition about such alien things should not be a guide, so I fully support attempts to slow down the takeoff.