Newcomb’s problem tries to show that CDT’s exclusive concern with what your decision causes afterwards can be a weakness, by providing an example where the things caused by accurate predictions of your decision outweigh the things caused by the decision itself. Everything else is just window dressing. If you use the window dressing to explain how you care about other things caused by the decision, you coincidentally act just as if you also cared about the causes of accurate predictions of your decision. But as long as you construe the consequences of the decision, which on the intended reading of the problem statement should cause the less desirable outcomes, as actually causing the more desirable outcomes, you are not addressing Newcomb’s problem. You are just showing that what is a formulation of Newcomb’s problem for most people isn’t a formulation of Newcomb’s problem for you, in a way that doesn’t generalize.
The “accurate prediction” is a central part of Newcomb’s problem. The questions of whether it’s possible (I feel it is) and IN WHAT WAYS it is possible are central to the validity of Newcomb’s problem.
If every possible way of realizing the accurate prediction made CDT work, then Newcomb’s problem wouldn’t be a problem for CDT, apart from the practical problem of CDT being hard to apply correctly.
At present, it seems like there are possible ways that make CDT work, and possible ways that make CDT not work. If it were to someday be proved that all possible ways make CDT work, that would be a major proof. If it were to be proved (beyond all doubt) that a possible way was completely incompatible with CDT, that could also be important for AI creation.