Doesn’t this merely sidestep the issue at the meta level? Now what the AI needs to do is modify itself to use pragmatic goals when modifying itself, to any depth of recursion, and then the situation collapses back into Dmytry’s concern.
So: the machine distinguishes the act of temporarily swapping an approved pragmatic goal in or out (in order to quickly burn through some instrumental task, for example) from more serious forms of self-modification.
How can you prove that, over time, the FAI does not degrade into a uFAI, or get quickly relegated to a shallower logistic growth curve relative to a uFAI?
Eeek! Too much FAI! I do not see how that idea has anything to do with this discussion.