The simple answer is “judge everything in your future by your current utility function”, but that doesn’t seem satisfactory.
It sounds satisfactory for agents that have utility functions. Humans don’t (unless you mean implicit utility functions under reflection, to the extent that different possible reflections converge), and I think it’s really misleading to talk as if we do.
Also, while this is just me, I strongly doubt our notional-utility-functions-upon-reflection contain anything as specific as preferences about polygamy.
That was just an example; people react differently to the idea that their values may change in the future, depending on the person and depending on the value.