I didn’t get that impression after reading this within the context of the rest of the sequence. Rather, it reads as a warning about the importance of foresight when planning a transhuman future. The “clever fool” in the story (presumably a parody of the author himself) released a self-improving AI into the world without knowing exactly what it would do or planning for every contingency.
Basically, the moral is: don’t call an AI “friendly” until you’ve thought of every last contingency.
Presumably, your own personal verthandi(s) would have other hobbies, because you would want them to.