I think you are assuming that “utility” means something like “happiness”. That is not the only possible way to use the word.
If there is a term in my utility function (to whatever extent I have a utility function) for accurate knowledge, then there can be situations indistinguishable to me to which I assign different utility, because I may be unable to tell whether some bit of my “knowledge” is actually accurate or not.
I think maybe you think there is something impossible or incoherent about this, perhaps on the grounds that it’s absurd to say you care about the difference between X and Y when you cannot actually discern the difference between X and Y. I disagree. If you tell me that you are either going to shoot me in the head or shoot me in the head and then murder a million other people, I prefer the former even though, being dead, I will be unable to tell whether you’ve murdered the million others or not. If you tell me that you will either slap me in the face and then shoot me dead, or else shoot me dead and then murder a million others, and if I believe you, then I will gladly take that slap in the face. If you tell me that you will either slap me in the face, convince me that you aren’t going to murder anyone else, kill me, and then murder a million others, or else just kill me and the million others, I will not take the slap in the face even if I am confident that you could convince me. (Er, unless I think that the time you take convincing me makes it more likely that somehow you never actually get to murder me.)
My utility function (to whatever extent I have a utility function) maps world-states to utilities, not my-experience-states to utilities. There is of course another function that maps my-experience states to utilities, or maybe to something like probability distributions over utilities (it goes: experience-state → my beliefs about the state of the world → my estimate of my utility function), but it isn’t the same function and it isn’t what I care about even if in some sense it’s necessarily what I act on: if you propose to change the world-state and the experience-state in ways that don’t match, then my preferences track what you propose to do to the world-state, not the experience-state.
(Of course my experiences are among the things I care about, and I care about some of them a lot. If you threaten to make me wrongly think you have murdered my family then that’s a very negative outcome for me and I will try hard to prevent it. But if I have to choose between that and having my family actually murdered, I pick the former.)
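The distinction between the two functions can be made concrete in a toy sketch. Everything here — the names, the numbers, the dictionary world-states — is an illustrative assumption of mine, not anything implied by the argument above; the point is only that a utility function over world-states and an expected-utility function over beliefs are genuinely different objects, and that the first can rank two worlds differently even when my experiences in them are identical:

```python
# Toy sketch: utility is defined on world-states, while what I act on is
# expected utility computed from my beliefs about the world. All values
# and state fields are made up for illustration.

def world_utility(world_state):
    """Hypothetical utility over world-states: family actually being alive
    matters far more than my experiences being pleasant."""
    u = 0.0
    if world_state["family_alive"]:
        u += 100.0
    if not world_state["i_believe_family_dead"]:
        u += 10.0  # experiences matter too, just much less
    return u

def expected_utility(beliefs):
    """What I can actually act on: an expectation of world_utility over a
    probability distribution (list of (probability, world_state) pairs)
    representing my beliefs."""
    return sum(p * world_utility(w) for p, w in beliefs)

# Two proposals that change world-state and experience-state in mismatched ways:
deceived = {"family_alive": True,  "i_believe_family_dead": True}  # wrongly think them dead
murdered = {"family_alive": False, "i_believe_family_dead": True}  # actually dead

# My experience is the same horrible one in both worlds, yet the
# world-state utility function still prefers the deception to the murder.
assert world_utility(deceived) > world_utility(murdered)
```

In this sketch the two worlds are indistinguishable from the inside (`i_believe_family_dead` is true in both), so any function defined purely on experience-states must assign them the same value; only the function on world-states can express the preference the comment describes.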