[Question] Does an LLM have a utility function?

There’s a lot of discussion and research into AI alignment, almost always about some variant of how to define/create a utility function (or meta-function, if it changes over time) that is actually aligned with … something. That something is at least humanity’s survival, but often something like flourishing or another semi-abstract goal. Oops, that’s not my question for today.

My question for today is whether utility functions are actually part of the solution at all. Humans don’t have them, and the most interesting current spurs toward AI (LLMs) don’t seem to have them either. Maybe anything complicated enough to be called AGI doesn’t have one (or at least doesn’t have a simple, concrete, consistent one).
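
For concreteness, here’s the contrast I have in mind, in standard textbook form (nothing specific to any particular model): a classical utility-maximizing agent picks actions by their expected utility over outcomes, while an LLM is trained to minimize next-token prediction loss.

```latex
% Classical utility-maximizing agent: choose the action a with the highest
% expected utility U over resulting world-states s', given the current state s.
a^{*} = \arg\max_{a} \; \mathbb{E}\left[\, U(s') \mid s, a \,\right]

% LLM pre-training objective: minimize next-token cross-entropy over text.
% This scores predictions of tokens x_t given context x_{<t}, with parameters \theta.
\mathcal{L}(\theta) = - \sum_{t} \log p_{\theta}\!\left(x_t \mid x_{<t}\right)
```

The second expression is purely about predicting text; nothing in it mentions preferences over world-states, which is part of why I’m unsure the utility-function framing applies.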