is if it turns out that advanced narrow-AIs manage to generate more utility than humans know what to do with initially.
I find this not just likely but borderline certain. Ubiquitous, explicitly below-human narrow AI has tremendous potential that we remain blind to, focused as we are on superhuman AI. Creating superhuman, self-improving AGI, while extremely dangerous, is also an extremely hard problem (in the same realm as dry nanotech or FTL travel). Meanwhile, creating brick-dumb but ubiquitous narrow AI and then mass-producing it to saturation is easy. It could be done today; it's just a matter of market forces and logistics.
It might very well be the case that once the number of narrow-AI systems, devices, and drones passes a certain threshold (say, it becomes as ubiquitous, cheap, and accessible as cars, though not yet as much as smartphones), we would enter a weaker form of post-scarcity and have no need to create AI gods.
I'm confused by this post. It might be that I lack the necessary knowledge or reading comprehension, but the post seems to dance around actual SELF-improvement (an AI improving itself, Ship-of-Theseus style) and refocuses on improvement by iteration (an AI creating another AI).
Consider a human example. In the last few years, I learned rationalist and mnemonic techniques to self-improve my thinking. I also fathered a child, raised them, and taught them basic rationalist and mnemonic tricks, making them an independent and only vaguely aligned agent potentially more powerful than I am.
The post seems to focus only on the latter (creating another agent), not the former.