How do humans assign utilities to world states?

It seems like a good portion of the whole “maximizing utility” strategy that a sovereign might use relies on actually being able to consolidate human preferences into utilities. I think there are a few stages here, each of which may present obstacles. I’m not sure what the current state of the art is with regard to overcoming these, and I’m curious to find out.

First, here are a few assumptions I’m using just to make the problem a bit more navigable (dealing with one or two hard problems instead of a bunch at once). I’ll eventually need to go back, do away with each of these (and each combination thereof), and see what additional problems result.

  1. The sovereign has infinite computing power (and, to shorten the list of assumptions, can do everything in 2-6 below).

  2. We’re maximizing across the preferences of a single human (Alice for convenience). To the extent that Alice cares about others, we’re accounting for their preferences, too. But we’re not dealing with aggregating preferences across different sentient beings, yet. I think this is a separate hard problem.

  3. Alice has infinite computing power.

  4. We’re assuming that Alice’s preferences do not change and cannot change, ever, no matter what happens. So as Alice experiences different things in her life, she has the exact same preferences. No matter what she learns or concludes about the world, she has the exact same preferences. To be explicit, this includes preferences regarding the relative weightings of present and future worldstates. (And in CEV terms, no spread, no distance.)

  5. We’re assuming that Alice (and the sovereign) can deductively derive the future from the present, given a particular course of action by the sovereign. Picture a single history of the universe from its beginning to now, and a bunch of worldlines running into the future depending on what action the sovereign takes. To clarify, if you ask Alice about any single little detail across any of the future worldlines, she can tell you that detail. (A toy sketch of this setup follows the list.)

  6. Alice can read the minds and preferences of other humans and sentient beings (implied by 5, but stated explicitly here).
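
To make assumption 5 a bit more concrete, here’s a minimal toy sketch in Python (every action, fact, and value in it is a made-up placeholder purely for illustration, not a proposed representation): the sovereign’s action indexes a single fully specified future, and “deducing the future” collapses to looking up any detail of that future on demand.

```python
# Toy illustration of assumption 5 (and, via the "wants" facts, assumption 6).
# All actions, facts, and values here are hypothetical placeholders.
from typing import Any, Dict

# Each possible sovereign action leads to exactly one future worldline,
# represented here as a flat dictionary of facts about that future.
WORLDLINES: Dict[str, Dict[str, Any]] = {
    "action_A": {"alice_location_2040": "Mars", "bob_wants_2040": "solitude"},
    "action_B": {"alice_location_2040": "Earth", "bob_wants_2040": "company"},
}

def query(action: str, detail: str) -> Any:
    """Ask Alice (or the sovereign) about any single detail of the worldline
    that follows from a given action; deduction collapses to a lookup."""
    return WORLDLINES[action][detail]

print(query("action_A", "alice_location_2040"))  # -> Mars
```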

So Alice can conclude pretty much anything and everything (and so can our sovereign). The sovereign is faced with the problem of figuring out what action to take to maximize across Alice’s preferences. However, Alice is basically a sack of meat that has certain emotions in response to certain experiences or certain conclusions about the world, and it doesn’t seem obvious how to get a preference ordering over the different worldlines out of those emotions. Some difficulties:

  1. The sovereign notices that Alice experiences different feelings in response to different stimuli. How does the sovereign determine which types of feelings to maximize, and which to minimize? There are a bunch of ways to deal with this, but most of them seem to have some chance of error, and compounded across all the times the sovereign will need to make this kind of judgment, the probability of at least one error approaches 1 (if each judgment errs independently with probability p, the chance of at least one error over n judgments is 1 - (1-p)^n). For example, the sovereign could train on an existing data set, or it could simulate other humans with access to Alice’s feelings and cognition and have that simulated committee discuss and reach a decision on each case, and so on. But all of these bootstrap off the assumed ability of humans to determine which feelings to maximize (just with amped-up computing power), which doesn’t strike me as a satisfactory solution. (The sketch after this list marks where this choice enters.)

  2. Assume 1 is solved. The sovereign knows which feelings to maximize. However, it’s ended up with a bunch of axes. How does it determine the appropriate trade-offs to make? (Or, to put it another way, how does it determine the relative value of a position along one axis compared with positions along the other axes? This is the weighting step in the sketch below.)
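
To make both difficulties concrete, here’s a sketch of where they live (every axis name, sign, weight, and number below is an arbitrary placeholder, not a proposal): once a sign and a weight exist for each feeling axis, collapsing Alice’s feelings into a scalar utility and picking the best action is trivial. The entire problem is that nothing in the sketch says how to get the signs (difficulty 1) or the weights (difficulty 2) right.

```python
# Sketch of the two difficulties, assuming a worldline has somehow already
# been reduced to Alice's feeling intensities along a few axes.
from typing import Dict

# Difficulty 1: which feelings get maximized (+1) and which get minimized (-1)?
SIGNS: Dict[str, float] = {"joy": +1.0, "curiosity": +1.0, "grief": -1.0}

# Difficulty 2: how much is a unit along one axis worth relative to a unit
# along another?
WEIGHTS: Dict[str, float] = {"joy": 1.0, "curiosity": 0.5, "grief": 2.0}

# Feeling intensities Alice would experience under each sovereign action,
# known exactly thanks to the deterministic prediction of assumption 5.
FEELINGS_BY_ACTION: Dict[str, Dict[str, float]] = {
    "action_A": {"joy": 0.7, "curiosity": 0.4, "grief": 0.2},
    "action_B": {"joy": 0.5, "curiosity": 0.9, "grief": 0.6},
}

def scalar_utility(feelings: Dict[str, float]) -> float:
    """Collapse the feeling axes into one number via signed weights."""
    return sum(SIGNS[axis] * WEIGHTS[axis] * value for axis, value in feelings.items())

# With a scalar utility in hand, the sovereign's choice is just an argmax.
best = max(FEELINGS_BY_ACTION, key=lambda a: scalar_utility(FEELINGS_BY_ACTION[a]))
print(best)  # action_A: 0.5 vs. -0.25 for action_B
```

The machinery downstream of the two dictionaries at the top is easy; everything contentious is packed into choosing their contents.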

So, to rehash my actual request: what’s the state of the art with regard to these difficulties, and how confident are we that we’ve reached a satisfactory answer?