Here, have a mathematical perspective that conflates beliefs and values:
Suppose that some agent is given a choice between A and B. A is an apple. B is a gamble: a chance N of a banana, otherwise nothing. The important thing here is the indifference equation: iff U(apple) = N·U(banana), the agent is indifferent between the apple and the gamble. Further suppose that N is 50% and the agent likes bananas twice as much as it likes apples. In this case, at least, the agent might as well modify itself to believe that N is 20% and to like bananas five times as much as apples.
Now, doing this might result in inconsistencies elsewhere, but I’m guessing that a rational agent will be able to apply transformations to its beliefs and values—but only both simultaneously—so as to preserve the expected utility of each available action.
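The transformation above can be sketched numerically. This is a minimal illustration, not anything from the text itself: the variable names (`u_apple`, `n`, `u_banana`) and the scaling factor k = 2.5 are my own, chosen to match the 50%/2× → 20%/5× example.

```python
# Rescale a belief (the probability N) down by k while rescaling a value
# (U(banana)) up by k: the expected utility of the gamble is unchanged,
# so the agent's choices are unchanged.

def expected_utility(n, u_banana):
    """Expected utility of gamble B: probability n of a banana, else nothing."""
    return n * u_banana

u_apple = 1.0

# Original beliefs and values: N = 50%, bananas worth twice apples.
n, u_banana = 0.5, 2.0

# Transformed with k = 2.5: N -> N/k = 20%, U(banana) -> k*U(banana) = 5 apples.
n2, u_banana2 = 0.2, 5.0

assert expected_utility(n, u_banana) == u_apple    # indifferent before
assert expected_utility(n2, u_banana2) == u_apple  # still indifferent after
```

Any k works here, which is the point: the belief and the value are only pinned down jointly, through their product.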
I think a more concrete example of the beliefs-values meld is anthropic reasoning. Take the presumptuous friend: you, the presumptuous philosopher’s presumptuous friend, have just been split into 1,000,001 branches, and each of those branches has been placed in a hotel room, with 1,000,000 of the rooms in one hotel and the remaining room in another. Is the probability that you’re in the small hotel 50%, negligible, or something in between? Well, that depends: if you care about each of your branches equally, it’s negligible; if you care about each hotel equally, it’s 50%.
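The two answers can be computed directly from the setup. A minimal sketch, assuming the numbers from the example (1,000,000 rooms in the big hotel, 1 in the small); the weighting schemes are just the two caring measures named above:

```python
# "Probability" of waking in the small hotel under two caring measures.
big_rooms, small_rooms = 1_000_000, 1

# Care about each branch equally: weight rooms uniformly.
p_small_by_branch = small_rooms / (big_rooms + small_rooms)

# Care about each hotel equally: weight hotels uniformly.
p_small_by_hotel = 1 / 2

print(p_small_by_branch)  # ~1e-6, negligible
print(p_small_by_hotel)   # 0.5
```

The arithmetic is trivial; the substance is that nothing in the physical setup picks one measure over the other, which is the sense in which the "probability" is really a value judgment.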