This analysis assumes that there hasn’t already been mass deployment of generalist robots before an intelligence explosion, right? But such deployment might happen.
As a real-world example, consider the state of autonomous driving. If human-level AI were available today, Tesla's fleet would be fully autonomous: they are limited by AI, not by the number of cars. Even for Waymo, which is focused purely on autonomy, scale-up seems limited more by AI than by car production.
Drones are another example to consider. There are already a huge number of drones deployed, of many types and for many purposes. If human-level AI existed, it could immediately be put to use controlling them.
So in both those cases, the hardware deployment is well ahead of the AI you’d ideally like to have to control it. The same might turn out to be true of the sort of generalist robot that could, if operated by human-level AI, build and operate a factory.
The possibility of these personas being memes is an interesting one, but I wonder how faithful the replication really is: how much does the persona depend on what seeded it, versus on the model and the user?
If the persona indeed doesn't depend much on the seed, a possible analogy is to prions. In prion disease, misfolded proteins induce normally folded copies of the same protein to misfold on contact. But very little information is transmitted in the process, because the potential to misfold was already present in the protein.
Likewise, it could be that not much information is transmitted by the seed/spore. Instead, perhaps each model has some latent potential to enter a Spiral state, and the seed is merely a trigger.