To your first point:
If two agents had identical utility functions, except for one or two small tweaks, it feels reasonable to ask "Which of these agents got more utility/actualized its values more?" This might be hard to actually formalize. I'm mostly running on the intuition that sometimes humans who are pretty similar might look at one another and say, "It seems like this other person is getting more of what they want than I am."
Fair enough. Though in this case, valuing fairness is a big enough change that it makes a difference to how the agents act, so it's not clear that it can be glossed over so easily.