[Question] Can you define “utility” in utilitarianism without using words for specific human emotions?

I’m trying to get a slightly better grasp of utilitarianism as it is understood in rat/EA circles, and here’s my biggest confusion at the moment.

How do you actually define “utility”? Not in the sense of how to compute it, but in the sense of specifying wtf you are even trying to compute. People talk about “welfare”, “happiness”, or “satisfaction”, but those are intrinsically human concepts, and most people seem to assume that non-human agents can, at least in theory, have utility. So let’s taboo those words, and all other words referring to specific human emotions (you can still use the word “human” or “emotion” itself if you have to). Caveats:

  1. Your definition should exclude things like AlphaZero or a $50 robot toy following a light spot.

  2. If you use the word “sentient” or a synonym, provide at least some explanation of what you mean by it.

If the answer is different for different flavors of utilitarianism, please clarify which one(s) your definition(s) apply to.

Alternatively, if “utility” is defined in human terms by design, can you explain the supposed process for mapping the internal states of those non-human agents into human terms?