I don’t really understand your approach yet. Let’s call your decision theory CLDT. You say counterfactuals in CLDT should correspond to consistent universes. For example, the counterfactual “what if a CLDT agent two-boxed in Newcomb’s problem” should correspond to a consistent universe where a CLDT agent two-boxes on Newcomb’s problem. Can you describe that universe in more detail?
You say counterfactuals in CLDT should correspond to consistent universes
That’s not quite what I wrote in this article:
However, this now seems insufficient as I haven’t explained why we should maintain the consistency conditions over comparability after making the ontological shift. In the past, I might have said that these consistency conditions are what define the problem and that if we dropped them it would no longer be Newcomb’s Problem… My current approach now tends to put more focus on the evolutionary process that created the intuitions and instincts underlying these incompatible demands as I believe that this will help us figure out the best way to stitch them together.
I’ll respond to the other component of your question later.