It is possible, though somewhat less likely, that even the specification of a paperclip is unstable under recursive improvement. Postulating agents that do not even know what a paperclip is seems less useful as a tool for constructing counterfactuals.
It is, however, useful for thinking about recursive stability in general, and thinking about designing agents to have stable goal systems.