Also, an agent that could change his preferences could trivially give himself good outcomes by setting his preferences to match whatever outcomes he is going to get anyway. The point is that if you could do this, you would not increase your expected utility as measured by your current utility function; you would instead create a different agent with a very convenient utility function. This is not something I would want to do.
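A minimal way to formalize this, with notation of my own choosing (not from the thread): let $U$ be the agent's current utility function and $p(o)$ the probability of outcome $o$, which the self-modification does not change. If the agent swaps in a new utility $U'$ that assigns the maximum value $U_{\max}$ to every outcome it might actually get, then

$$\mathbb{E}_{o\sim p}[U'(o)] \;=\; \sum_o p(o)\,U_{\max} \;=\; U_{\max}, \qquad \text{while} \qquad \mathbb{E}_{o\sim p}[U(o)] \;=\; \sum_o p(o)\,U(o) \;\text{ is unchanged.}$$

Since the pre-modification agent evaluates the decision with $U$, not $U'$, the modification scores no better than doing nothing; only the successor agent, with its conveniently rigged $U'$, is "satisfied."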
Oh, yes, you are right, I forgot that he doesn’t get to see the result.