Goals don’t need to be specified explicitly; all that’s required is that the future agent in fact has goals similar to the original agent’s. However, since construction of the future agent is part of the original agent’s behavior, and that behavior contributes to the original agent’s goals (by my definition), it doesn’t necessarily make sense for the agent to prove that goals are preserved. It just needs to be true that they are (to some extent), more as an indication that we understand the original agent correctly than as a consideration that the agent itself takes into account.
For example, the original agent might be bad at accomplishing its “normative” goals: even though it does optimize the environment to some extent, it doesn’t do so very well, so the definition of its “normative” goals (tied, in my definition, to actual effect on the environment) doesn’t clearly derive from the original agent’s construction. The exception is its tendency to construct future agents with certain goals (assuming it can do that true to the “normative” goals), in which case the future agent’s goals (as parameters of its design) are closer to the mark (actual effect on the environment, and the “normative” goals) than the original agent’s (as parameters of its design).
> However, since construction of the future agent is part of the original agent’s behavior, and that behavior contributes to the original agent’s goals (by my definition), it doesn’t necessarily make sense for the agent to prove that goals are preserved. It just needs to be true that they are (to some extent), more as an indication that we understand the original agent correctly than as a consideration that the agent itself takes into account.
(Emphasis added.) For that sense of “specify”, I agree.