Ah. I think the inference they may take is that a paperclip maximizer is perfectly rational/coherent, as is a staple maximizer and so on. They don’t think there are additional constraints as you suggest, beyond minimal ones like not having an “especially stupid” goal, such as “die as fast as possible”.