[Question] Why does AGI need a utility function?

My intuition says that a narrow AI like DALL-E would not blow up the world, no matter how much smarter it became. It would just get really good at making pictures.

This is clearly a form of superintelligence we would all prefer, and the difference, as I see it, is that DALL-E doesn’t really have ‘goals’ or anything like that; it’s just a massive tool.

So why do we want AGI to have a utility function at all?