Also, of course, an agent who at each moment makes the decision that maximises expected future utility defects against Clippy in both the Prisoner’s Dilemma and Parfit’s Hitchhiker scenarios, and arguably two-boxes against Omega. By EY’s definition that counts as “not winning”, because of the negative consequences of Clippy/Omega knowing in advance that this is how we decide.
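To make the Omega case concrete, here is a minimal sketch of Newcomb’s problem in Python. The payoffs ($1,000 in the visible box, $1,000,000 in the opaque box) are the standard ones, and the perfectly accurate predictor is an illustrative assumption:

```python
# Newcomb's problem with a perfect predictor (illustrative assumption):
# Omega fills the opaque box iff it predicts the agent will take only
# the opaque box.

def newcomb_payoff(two_boxes: bool) -> int:
    opaque = 0 if two_boxes else 1_000_000  # filled iff one-boxing predicted
    visible = 1_000
    return opaque + (visible if two_boxes else 0)

# CDT reasons causally: the boxes are already filled, so taking both
# dominates taking one -- it two-boxes.  Omega has predicted this.
print("CDT (two-box):", newcomb_payoff(two_boxes=True))   # 1,000
print("One-boxer:    ", newcomb_payoff(two_boxes=False))  # 1,000,000
```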
I think I’m misunderstanding you here because this looks like a contradiction. Why does making the decision that maximizes expected utility necessarily have negative consequences? It sounds like you’re working under a decision theory that involves preference reversals.
I’m talking about the difference between CDT, which stiffs the lift-giver in Parfit’s Hitchhiker and so never gets a lift, and other decision theories.
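A minimal sketch of why CDT never gets the lift, using a perfectly accurate driver and illustrative payoff numbers (neither is from the thread), with a UDT-style policy chooser standing in for “other decision theories”:

```python
# Parfit's Hitchhiker: the driver gives a lift only if he predicts the
# agent will pay on arrival.  Payoffs are illustrative assumptions:
# +1_000 for being rescued, -100 for paying the fare.

def cdt_pays_in_town() -> bool:
    # Once in town, the lift is in the past; paying is a pure causal loss,
    # so the per-moment expected-utility maximiser stiffs the driver.
    utility_if_pay = -100
    utility_if_stiff = 0
    return utility_if_pay > utility_if_stiff  # False

def udt_pays_in_town() -> bool:
    # A policy-level chooser compares whole policies: "pay" is the policy
    # that gets predicted, hence the policy that gets the lift.
    utility_of_paying_policy = 1_000 - 100  # rescued, minus the fare
    utility_of_stiffing_policy = 0          # predicted, so no lift
    return utility_of_paying_policy > utility_of_stiffing_policy  # True

def outcome(pays_in_town: bool) -> int:
    # Perfect prediction (assumption): a lift happens iff the agent would pay.
    return (1_000 - 100) if pays_in_town else 0

print("CDT:", outcome(cdt_pays_in_town()))  # 0   -- left in the desert
print("UDT:", outcome(udt_pays_in_town()))  # 900 -- rescued, pays the fare
```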
Oh, I see. I thought you were saying an optimal decision theory stiffed the lift-giver.
I hope I’ve become clearer in the four years since I wrote that!
. . . did not notice the date-stamp. Good thing thread necros are allowed here.