Can you give a concrete example of why the utility function should change?
You mean, why I expect a person-affecting utility function to be different if evaluated today vs. tomorrow?
Well, suppose that today I consider the action of creating a person, and am indifferent to creating them. Since this is true for all sorts of people, I am indifferent to creating them one way vs. another (e.g. happy vs. sad). If they are to be created inside my guest bedroom, this means I am indifferent between certain ways the atoms in my guest bedroom could be arranged. Then if this person gets created tonight and is around tomorrow, I'm no longer indifferent between the arrangement that is them sad and the arrangement that is them happy.
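To pin down what I mean in symbols (the notation is just for illustration): write $A_{\text{happy}}$ and $A_{\text{sad}}$ for the two bedroom arrangements, and $U_t$ for the person-affecting utility function as evaluated at time $t$. Then

$$U_{\text{today}}(A_{\text{happy}}) = U_{\text{today}}(A_{\text{sad}}), \qquad U_{\text{tomorrow}}(A_{\text{happy}}) > U_{\text{tomorrow}}(A_{\text{sad}}),$$

and no single time-independent $U$ over arrangements can reproduce both judgments at once.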
Yes, you could always reverse-engineer a utility function over world-histories that encompasses both of these. But this doesn’t necessarily solve the problems that come to mind when I say “change in utility functions”—for example, I might take bets about the future that appear lose/lose when I have to pay them off, or take actions that modify my own capabilities in ways I later regret.
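Here's a toy sketch of that kind of bet in Python, with made-up payoffs just to show the shape of the problem: at t0 I'm indifferent between the happy and sad arrangements, so I'd sell, for any positive fee, an option that lets a counterparty force the sad arrangement; at t1, once the person exists, honoring that option is a loss I regret.

```python
# Toy sketch: the payoffs, fee, and "option" are hypothetical illustrations.

def u_t0(arrangement):
    """Person-affecting utility before the person exists:
    indifferent between all arrangements of the guest bedroom."""
    return {"empty": 0, "happy": 0, "sad": 0}[arrangement]

def u_t1(arrangement):
    """Utility after the person exists: no longer indifferent."""
    return {"empty": 0, "happy": 10, "sad": -10}[arrangement]

FEE = 1  # any positive fee works

# At t0, selling an option that forces "sad" instead of "happy" looks like
# free money: both arrangements are worth 0, so pocketing the fee dominates.
sell_at_t0 = u_t0("sad") + FEE > u_t0("happy")  # 0 + 1 > 0, so True

# At t1 the bet comes due: honoring it trades 20 utility for a fee of 1.
# Under u_t1 the earlier trade now looks like a pure loss, even though it
# was rational under u_t0; this is the "pay it off and lose" case.
regret_at_t1 = u_t1("happy") - (u_t1("sad") + FEE)  # 10 - (-10 + 1) = 19

print(sell_at_t0, regret_at_t1)  # True 19
```

Both choices are locally rational under the utility function in force at the time, which is why stitching them into one function over world-histories doesn't by itself dissolve the exploitability.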
I dunno—were you thinking of some specific application of indifference that could sidestep some of these problems?