A small risk of losing the utility it was previously counting on.
Of course you can run intuition pumps in either direction: I don't feel like I'd want the AI to sacrifice everything in the universe we know for a 0.01% chance of making it into a bigger universe, but some level of risk has to be worth a vast increase in potential fun.
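To make that tradeoff concrete, here's a minimal sketch with made-up numbers (the sure utility of 1.0, the payoff sizes, and the all-or-nothing shape of the gamble are all assumptions for illustration): under an unbounded utility function a large enough payoff makes even the 0.01% gamble look good, while a tightly bounded one rejects it.

```python
# A minimal sketch with made-up numbers, just to make the tradeoff concrete.
# Assume the agent currently holds a sure utility of 1.0 (the universe we know),
# and the gamble pays off with probability p, yielding utility `payoff`,
# and otherwise loses everything (utility 0).

def gamble_is_worth_it(current: float, payoff: float, p: float) -> bool:
    """Expected-utility test: take the gamble iff p * payoff > current."""
    return p * payoff > current

p = 0.0001  # the 0.01% chance mentioned above

# With an unbounded utility function, a large enough payoff always wins out:
print(gamble_is_worth_it(current=1.0, payoff=1e6, p=p))    # True: 0.0001 * 1e6 = 100 > 1

# With utility capped at, say, 100, the same 0.01% gamble never clears the bar:
print(gamble_is_worth_it(current=1.0, payoff=100.0, p=p))  # False: 0.0001 * 100 = 0.01 < 1
```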
It seems to me that expanding further would reduce the risk of losing the utility it was previously counting on.
LCPW (the Least Convenient Possible World) isn’t even necessary: do you really think it wouldn’t make a difference that you’d care about?
LCPW cuts two ways here, because there are two universal quantifiers in your claim. You need to look at every possible bounded utility function, not just every possible scenario. At least, if I understand you correctly, you’re claiming that no bounded utility function reflects your preferences accurately.
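To illustrate why that second quantifier matters, here's a sketch of one particular bounded utility function (the saturating form and the numbers are assumptions, not anything anyone in the thread has proposed): it rejects the 0.01% gamble from above while still accepting a moderate risk for a vast increase in potential fun, so it's at least a candidate for reflecting those stated preferences.

```python
# One particular bounded utility function, assumed purely for illustration:
# "raw fun" is mapped into [0, 100) and saturates as raw fun grows.

BOUND = 100.0

def bounded_utility(raw_fun: float) -> float:
    """Saturating map from unbounded raw fun into [0, BOUND)."""
    return BOUND * raw_fun / (raw_fun + 1.0)

current = bounded_utility(1.0)  # the sure thing: utility 50.0

# A 0.01% shot at astronomically more fun is rejected...
print(0.0001 * bounded_utility(1e12) > current)  # False: ~0.01 < 50

# ...but a 60% shot at the same vast increase is accepted:
print(0.6 * bounded_utility(1e12) > current)     # True: ~60 > 50
```

So establishing that no bounded utility function reflects those preferences means ruling out functions like this one, not just finding a scenario that defeats one particular bound.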