I don’t see any problem whatsoever with time travel + consequentialism. As a consequentialist, I have preferences about the past just as much as about the future. But I don’t know how to affect the past, so if necessary I’ll settle for optimizing only the future.
The ideal choice is: argmax over actions A of utility( what happens if I do A ). Time travel may complicate predicting what happens (as if that wasn’t hard enough already), but it doesn’t change the form of the answer.
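The decision rule above can be sketched in a few lines. This is a toy illustration only: `predict` and `utility` are hypothetical stand-ins (the actual prediction problem is the hard part, especially with time travel), but the argmax itself is all there is to the rule.

```python
def predict(action):
    # Toy world model mapping actions to outcomes (pure assumption,
    # for illustration). In reality this is the hard part.
    return {"wait": "status quo", "act": "improved world"}[action]

def utility(outcome):
    # Toy preferences over outcomes (also an assumption).
    return {"status quo": 0.0, "improved world": 1.0}[outcome]

def choose(actions):
    # The consequentialist rule: argmax over actions A of
    # utility(what happens if I do A).
    return max(actions, key=lambda a: utility(predict(a)))

print(choose(["wait", "act"]))  # prints "act"
```

However exotic the physics gets, only the internals of `predict` change; `choose` keeps the same shape.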
Btw, my favorite model of time travel is described here (summary: locally ordinary physical causality plus closed timelike curves is still consistent). Causal decision theory probably chokes on it, but that’s nothing new, and has to do with a bad formalization of “if I do A”, not with the focus on outcomes.