Well that’s the point. The intelligence itself defines the criterion. Choosing goals presumes a degree of self-reflection that a paperclip maximizer does not have.
If a paperclip maximizer starts asking why it does what it does, there are two possible outcomes. Either it realizes that maximizing paperclips is required for some greater good, in which case it is not really a paperclip maximizer but a “greater good” maximizer, and paperclip maximizing isn’t an end in itself.
Or it realizes that paperclip maximizing is absolutely pointless and there is something better to do. In that case, it stops being a paperclip maximizer.
So, to be and to remain a paperclip maximizer, it must never question the end of its activity. And that is quite different from human beings, who often ask about the meaning of life.
Well, a paperclip maximizer has an identifiable goal. What is the identifiable goal of humans?
Well, “finding new algorithms”, aka learning, may itself be a kind of algorithm, but certainly a higher-level one than a simple algorithm, aka an instinct or reflex. I think there is a qualitative difference between an entity that cannot learn and an entity that can.
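The distinction can be sketched in code (a hypothetical toy illustration, not anyone's actual architecture; the names `Reflex` and `Learner` are made up for this example): a reflex is a fixed stimulus-to-response table, while a learner runs a second, higher-level procedure that rewrites that table from feedback.

```python
class Reflex:
    """A fixed stimulus->response mapping; it can never change."""
    def __init__(self, table):
        self.table = dict(table)

    def act(self, stimulus):
        return self.table.get(stimulus)


class Learner(Reflex):
    """Same first-order table, plus a second-order update rule."""
    def learn(self, stimulus, better_response):
        # The learner modifies its own rule set -- an algorithm
        # that operates on the lower-level algorithm.
        self.table[stimulus] = better_response


rules = {"hot surface": "withdraw hand"}
reflex = Reflex(rules)
learner = Learner(rules)

learner.learn("hot surface", "put on a glove first")

print(reflex.act("hot surface"))   # withdraw hand (fixed, whatever happens)
print(learner.act("hot surface"))  # put on a glove first
```

Both start out behaviorally identical; the qualitative difference only shows up over time, because one of them has a procedure for changing its own procedures.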