There’s no a priori reason to care about what other “agents” in your causal past (your past light cone!) “cared” about.
Nor is there an a priori reason for an AI to exist, to understand what ‘paperclips’ are, let alone to self-improve by learning as a human child does, absorb human languages, and upgrade itself to the point where it could take over the world.
I suspect that any team of scientists or engineers capable of building an AGI with at least human-infant-level cognitive capacity and the ability to learn human language will recognize that making the AI’s goal system dynamic is not merely advantageous, but is required in practice by the very cognitive capabilities that understanding human language demands.
The idea of a paperclip maximizer taking over the world is a mostly harmless absurdity, but one that detracts from serious discussion.