Ok, I’ve been talking it over with Benjamin Fox some more, and I don’t think Omega’s trustworthiness is the issue here. The issue is basically to come up with some decision-theoretic notion of “virtue”: “I should take action X because, timelessly speaking, a history in which I always respond to choice Y with action X nets me more money/utility/happiness than any other.” The idea is that taking action X or not doing so in any one particular instance can change which history we’re enacting, while normal decision theories reason only over the scope of a single choice-instance, with little regard for potential futures about which we don’t have specific information encoded in our causal graph.
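To make the contrast concrete, here is a minimal sketch (the payoffs are the usual counterfactual-mugging numbers, assumed here since none are stated in this thread: Omega asks for $100 on tails and pays $10,000 on heads iff it predicts you would have paid on tails):

```python
# Policy-level vs. act-level evaluation of the counterfactual mugging.
# Assumed (illustrative) payoffs: on tails Omega asks for $100; on heads
# Omega pays $10,000 iff it predicts you would have paid on tails.

COST, REWARD, P_HEADS = 100, 10_000, 0.5

def policy_value(pays: bool) -> float:
    """Expected value of the whole history, averaged over the coin flip."""
    heads_branch = REWARD if pays else 0   # rewarded only if you're a payer
    tails_branch = -COST if pays else 0    # you actually hand over the $100
    return P_HEADS * heads_branch + (1 - P_HEADS) * tails_branch

def act_value_given_tails(pays: bool) -> float:
    """The single-choice-instance view: the coin has already come up tails."""
    return -COST if pays else 0

print(policy_value(True), policy_value(False))                    # 4950.0 0.0
print(act_value_given_tails(True), act_value_given_tails(False))  # -100 0
```

The policy “always pay” wins when scored over histories, while the per-instance view sees only the $100 loss — that gap is exactly what the “virtue” framing is trying to close.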
It seems to me that the impact of being virtuous on one’s potential future is enough to justify being virtuous, and one does not need to take into account its impact on alternative presents one might have faced instead. (Basically, instead of trusting that Omega would have given you something in an alternate world, you are trusting that human society is perceptive enough to notice and reward enough of your virtues to justify having them.)
Yes, we agree. “I will be rewarded for this behavior in the future at a rate that justifies my sacrifice in the present” is a reason to “self-sacrifice” in the present. The question is how to build a decision theory that can encode this kind of knowledge without requiring actual prescience (that is, without needing to predict the specific place and time at which the agent will be rewarded).
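One crude way to encode it, sketched below with made-up numbers (P_NOTICED, AVG_REWARD, and the rest are illustrative assumptions, not anything established in this thread): score the policy against an expected rate of diffuse future reward, with no model of when or where any particular reward arrives.

```python
# Sketch: evaluate "be virtuous" as a policy against a diffuse reward rate,
# with no prediction of the specific place or time of any reward.
# All numbers are illustrative assumptions.

COST_PER_ACT = 100   # up-front cost of each virtuous act
P_NOTICED = 0.3      # chance society ever notices a given act
AVG_REWARD = 500     # average payoff when an act is noticed
N_ACTS = 1_000       # how many such choices a lifetime presents

def lifetime_value(virtuous: bool) -> float:
    if not virtuous:
        return 0.0
    per_act = -COST_PER_ACT + P_NOTICED * AVG_REWARD  # expected net per act
    return N_ACTS * per_act

print(lifetime_value(True))   # 50000.0
print(lifetime_value(False))  # 0.0
```

The policy is profitable exactly when P_NOTICED * AVG_REWARD exceeds COST_PER_ACT, so the agent only needs those aggregate statistics — not a causal-graph entry for each future reward.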
Even using that notion of virtue, giving Omega the $100 only benefits you if Omega is trustworthy. So Omega’s trustworthiness can still be a deciding factor.
Omega’s trustworthiness mostly just means we can assign a degenerate probability of 1.0 to all information we receive from Omega.
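As a toy model of what that buys us (purely illustrative, not anything from the thread): a fully trustworthy Omega has likelihood 1 for true reports and 0 for false ones, so updating on its reports just filters out the inconsistent worlds.

```python
# Toy Bayesian update treating Omega's report as certain evidence:
# P(report | world) is 1 when the report is true of that world, else 0,
# so updating simply discards worlds inconsistent with the report.

prior = {"coin=heads": 0.5, "coin=tails": 0.5}  # illustrative hypotheses

def update_on_omega(prior: dict, report_true_in: set) -> dict:
    posterior = {w: p for w, p in prior.items() if w in report_true_in}
    total = sum(posterior.values())
    return {w: p / total for w, p in posterior.items()}

print(update_on_omega(prior, {"coin=tails"}))  # {'coin=tails': 1.0}
```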