[Question] Could we set a resolution/stopper for the upper bound of the utility function of an AI?

My thought is that, instead of having the AI purely maximize a utility function, we give it the goal of reaching a certain utility level. Each time it reaches that level, it shuts off, and we can then decide whether we want to raise the ceiling. Clearly this could have some negative effects, but it might be useful in concert with other safety precautions.
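For concreteness, here is a minimal sketch of the loop I have in mind. Everything here is hypothetical toy code: `step_utility` stands in for whatever the real agent actually optimizes, and the human decision is just a prompt at the console.

```python
import random


def run_capped_agent(utility_ceiling: float) -> float:
    """Accumulate utility until the ceiling is reached, then halt.

    The random increment is a stand-in for one optimization step
    by the agent; a real system would do actual work here.
    """
    total = 0.0
    while total < utility_ceiling:
        total += random.uniform(0.0, 1.0)  # one "step" of optimization
    return total


def main() -> None:
    ceiling = 10.0
    while True:
        achieved = run_capped_agent(ceiling)
        print(f"Agent halted at utility {achieved:.2f} (ceiling {ceiling:.2f})")
        # The agent stays off until a human explicitly raises the ceiling.
        answer = input("Raise the ceiling and resume? [y/N] ")
        if answer.strip().lower() != "y":
            break
        ceiling += 10.0  # the human chooses the new, higher target


if __name__ == "__main__":
    main()
```

The key design choice is that the shutdown is the agent's own stopping condition rather than an external interrupt, which is where I suspect the hard alignment questions (e.g., whether the agent would resist the shutoff) come in.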
