Leaving aside the question of whether Tool AI as you describe it is possible until I’ve thought more about it:
The idea of a “self-improving algorithm” intuitively sounds very powerful, but does not seem to have led to many “explosions” in software so far (and it seems to be a concept that could apply to narrow AI as well as to AGI).
Looking to the past for examples is a very weak heuristic here, since we have never before dealt with software that could write code at a better-than-human level. It’s like saying, before the invention of the internal combustion engine, “faster horses have never let you cross oceans before.” The same goes for the assumption that strong AI will resemble, in specific respects, the extremely narrow AI software tools that already exist. It’s evidence, but it’s very weak evidence, and I for one wouldn’t bet on it.