I’d like to hear more about how you think discounting should work in a rational agent, on more conventional topics than time travel.
I don’t think discounting should be used at all; instead, facts about the past and future (e.g. expected future wealth) should be used to get discount-like effects.
However, certain agent designs (AIXI, unbounded utility maximisers, etc.) might need discounting as a practical tool. In those cases, adding this hack would let them discount while reducing the negative effects.
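To make the contrast concrete, here is a minimal sketch (all function names, numbers, and the log-utility assumption are illustrative, not from the discussion above) of a hard-coded discount factor versus a discount-like effect that falls out of expected future wealth:

```python
import math

def discounted_value(payoff, delay, gamma=0.95):
    """Conventional exponential discounting: a hard-coded time preference."""
    return payoff * gamma ** delay

def wealth_adjusted_value(payoff, delay, wealth_now, growth=1.05):
    """Discount-like effect from facts about the future: if expected wealth
    grows over time, a fixed payoff adds less utility later, assuming
    diminishing marginal utility of wealth (log utility here)."""
    expected_wealth = wealth_now * growth ** delay
    return math.log(expected_wealth + payoff) - math.log(expected_wealth)

# A delayed payoff is worth less in both cases, but for different reasons:
# the first bakes impatience into the agent, the second derives it from
# an (assumed) fact about the world.
```

In the second function nothing in the agent's values mentions time at all; the apparent discounting disappears if expected wealth stops growing.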
Utility can’t be stored, and gets re-evaluated for each decision.
It depends. Utility that sums (e.g. total hedonistic utilitarianism, or a reward-agent made into a utility maximiser) does accumulate. Some other variants have utility that accumulates non-linearly. And many non-accumulating utilities might have an accumulating component.
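A minimal sketch of the three cases (all names and the square-root choice are illustrative assumptions, not taken from the discussion):

```python
def summed_utility(rewards):
    """Reward-agent style: utility is the sum of per-step rewards,
    so it accumulates linearly over time."""
    return sum(rewards)

def nonlinear_utility(rewards):
    """A variant that accumulates non-linearly: diminishing returns
    on the running total (square root, as an arbitrary example)."""
    return sum(rewards) ** 0.5

def goal_utility(world_state):
    """Non-accumulating: utility is re-evaluated from the current
    world state at each decision; nothing is 'stored'."""
    return 1.0 if world_state.get("goal_reached") else 0.0
```

Only the first two give past gains any weight in future decisions, which is what makes discounting relevant to them at all.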