I don’t know whether this bears directly on this point, but I am reminded of a discussion in Toby Ord’s PhD thesis on how just as the consequences of an action propagate forwards in time, rightness propagates backwards. If it is right to pull the lever, it is right to push the button that pulls the lever, right to throw the ball that pushes the button that pulls the lever and so on.
This struck me as an argument for consequentialism in itself, since this observation is a natural consequence of consequentialism and doesn’t follow so obviously from deontology, but perhaps this kind of thinking is built into deontology in a way I don’t see.
Can consequentialism handle the possibility of time-travel? If not, then something may be wrong with consequentialism, regardless of whether time-travel is actually possible or not.
One of the intuitions leading me to deontology is exactly the time-symmetry of physics. Almost by definition, the rightness of an act can only be perfectly decided by an outside observer of the space-time continuum. (I could call the observer God, but I don’t want to be modded down by inattentive mods.) Now, maybe I have read too much Huw Price and Gary Drescher, but I don’t think this fictional outside observer would care too much about the local direction of the thermodynamic arrow of time.
I don’t see any problem whatsoever with time travel + consequentialism. As a consequentialist, I have preferences about the past just as much as about the future. But I don’t know how to affect the past, so if necessary I’ll settle for optimizing only the future.
The ideal choice is: argmax over actions A of utility( what happens if I do A ). Time travel may complicate the predicting of what happens (as if that wasn’t hard enough already), but doesn’t change the form of the answer.
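The argmax above can be sketched in a few lines of Python. This is just a toy illustration of the form of the answer; the world model (`predict_outcome`) and the outcome table are hypothetical stand-ins, since the hard part is of course predicting what happens:

```python
# Minimal sketch of the decision rule: argmax over actions A
# of utility(what happens if I do A). The outcome table below
# is a made-up stand-in for a real (and very hard) world model.

def choose_action(actions, predict_outcome, utility):
    """Return the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda a: utility(predict_outcome(a)))

# Toy example: outcomes are plain numbers, utility prefers larger ones.
outcome_of = {"pull_lever": 10, "push_button": 10, "do_nothing": 0}.get

best = choose_action(["pull_lever", "push_button", "do_nothing"],
                     predict_outcome=outcome_of,
                     utility=lambda x: x)
print(best)  # prints "pull_lever" (first of the tied-best actions)
```

Note that time travel only complicates `predict_outcome`, not the surrounding argmax.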
Btw, my favorite model of time travel is described here (summary: locally ordinary physical causality plus closed timelike curves is still consistent). Causal decision theory probably chokes on it, but that’s nothing new, and has to do with a bad formalization of “if I do A”, not with the focus on outcomes.
I don’t think so, but I’d be happy to hear why you say that.