I freely admit that the problem may still be above my pay grade at this point, but your comment does accurately describe my dissatisfaction with some treatments of Newcomb’s problem I’ve seen in rationalist circles. It’s like they want the decision to have everything we recognize as “causing”, but not call it that.
Perhaps it would help to repeat an analogy someone made a while back here (I think it was PhilGoetz). It’s a mapping from Newcomb’s problem to the issue of revenge:
Have the disposition to one-box --> Have the disposition to take revenge (and vice versa)
Omega predicts you’ll one-box --> People deem you the type to take revenge (perhaps at great personal cost)
You look under the sealed box --> You find out how people treat you
You actually one-box --> You actually take revenge
The mapping isn’t perfect—people don’t have Omega-like predictive powers—but it’s close enough, since people can do much better than chance.
What happens when I one-box and find nothing? Well, as is permitted in some versions, Omega made a rare mistake, and its model of me didn’t show me one-boxing.
What happens when I’m revenge-oriented, but people cheat me on deals? Well, they guessed wrong, as could Omega. But you can see how the intention has causal influence, which ends once the “others” make their irreversible choice. Taking revenge doesn’t undo those acts, but it may prevent future ones.
Apologies if I’ve missed a discussion that already beat this issue to death, which I probably have. Indeed, that was the complaint when (I think) PhilGoetz brought it up.
Update: PhilGoetz was the one who gave me the idea, in what was a quite reviled top-level post. But interestingly enough, in that thread, Eliezer_Yudkowsky said that his decision theory would have him take revenge, and for the same reason that he would one-box!
And here’s my remark showing my appreciation for PhilGoetz’s insight at the time. An under-rated post on his part, I think...