Pretty much that, actually. It doesn’t seem too irrational, though. Upon looking at a mathematical universe where torture was decided upon as a good thing, it isn’t an obvious failure of rationality to hope that a cosmic ray flips the sign bit of the utility function of an agent in there.

The practical problem with values that care about other mathematical worlds, however, is that if the agent you built has a UDT prior over values, it’s an improvement (from the perspective of the prior) for the nosy neighbors — the values that care about other worlds — to dictate some of what happens in your world, since the marginal contribution of your world to the prior expected utility looks like a linear combination of the various utility functions, weighted by how much each cares about your world. So, in practice, it’d be a bad idea to build a UDT value-learning prior containing utility functions that have preferences over all worlds: if run, it’d add a bunch of extra junk from the other utility functions to our world.
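A toy numeric sketch of that argument (all weights and payoffs here are hypothetical, just to make the linear-combination point concrete): give the prior two utility functions, one that’s the intended value for our world and one “nosy neighbor” that mostly lives elsewhere but still cares about what happens here. Maximizing prior expected utility then lets the neighbor’s term tip the agent’s choice in our world.

```python
# Toy value-learning prior over two utility functions.
# U_ours is the intended utility for our world; U_nosy is a "nosy
# neighbor" from another world that also has preferences over ours.
# All numbers are made up for illustration.

actions = ["A", "B"]

# How much each utility function values each action taken in OUR world.
U_ours = {"A": 1.0, "B": 0.9}   # we mildly prefer A
U_nosy = {"A": 0.0, "B": 5.0}   # the neighbor strongly prefers B

# Prior weight the value-learning prior puts on each utility function.
prior = {"ours": 0.9, "nosy": 0.1}

def prior_expected_utility(action):
    # Marginal contribution of our world to the prior expected utility:
    # a linear combination of the utilities, weighted by the prior (and
    # by how much each function cares about our world, folded into U).
    return prior["ours"] * U_ours[action] + prior["nosy"] * U_nosy[action]

best = max(actions, key=prior_expected_utility)
print(best)  # "B": the neighbor's 10% weight is enough to flip the choice
```

Even at 10% prior weight, the neighbor’s strong preference over our world outvotes our own mild preference — that’s the “extra junk” leaking in.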

Are you talking about something like this?

“I’m grateful to HAL for telling me that cows have feelings. Now I’m pretty sure that, even if HAL had a glitch and mistakenly told me that cows are devoid of feeling, eating them would still be wrong.”

That’s valid reasoning. The right way to formalize it is to have two worlds, one where eating cows is okay and another where eating cows is not okay, without any “nosy preferences”. Then you receive probabilistic evidence about which world you’re in, and deal with it in the usual way.
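That formalization can be sketched in a few lines (the reliability number is a hypothetical stand-in): two candidate worlds, no cross-world preferences, and HAL’s report treated as ordinary probabilistic evidence about which world you’re in.

```python
# Two worlds, no nosy preferences: each hypothesis only evaluates what
# happens in its own world. HAL's report is just evidence about which
# world we're in, updated by Bayes in the usual way.

prior = {"cows_feel": 0.5, "cows_dont": 0.5}

# Assumed reliability of HAL (hypothetical): HAL reports "cows feel"
# with probability 0.9 if they do, and 0.1 if they don't.
p_says_feel = {"cows_feel": 0.9, "cows_dont": 0.1}

def posterior(report_says_feel):
    # Likelihood of the observed report under each world.
    like = {w: (p_says_feel[w] if report_says_feel else 1 - p_says_feel[w])
            for w in prior}
    z = sum(prior[w] * like[w] for w in prior)  # normalizing constant
    return {w: prior[w] * like[w] / z for w in prior}

post = posterior(report_says_feel=True)
print(round(post["cows_feel"], 2))  # 0.9
```

Note that a glitchy report only shifts the probabilities over worlds; no world’s values get a say over what happens in the other world, which is exactly what the “no nosy preferences” condition buys you.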
