Are these really “fair” problems? Is there some intelligible sense in which they are not fair, but Newcomb’s problem is fair? It certainly looks like Omega may be “rewarding irrationality” (i.e. giving greater gains to someone who runs an inferior decision theory), but that’s exactly the argument that CDT theorists use about Newcomb.
In Newcomb’s Problem, Omega determines ahead of time which decision theory you use. In these problems, it selects an arbitrary decision theory ahead of time. For any agent using the preselected decision theory, these problems are therefore variations of Newcomb’s problem; for any agent using a different decision theory, the problem is quite different (and simpler). Thus, an agent whose decision theory has been preselected can only perform as well as in a standard Newcomb’s problem, while a luckier agent may perform better. In other words, there are equivalent problems in which Omega bases its decision on the output of a CDT or EDT agent, and in those problems CDT and EDT actually perform worse than TDT does in these.
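The payoff structure of that argument can be sketched concretely. This is a minimal illustration, not anything from the original problem statements: the payoff values, the function names, and the assumption that Omega simply fills both boxes for agents not running the preselected theory are all mine, chosen to make the asymmetry visible.

```python
# Hypothetical payoff sketch, assuming Omega preselects one decision theory T.
# Agents running T face a standard Newcomb's problem (Omega's prediction of
# their choice is accurate); agents running anything else find both boxes full.

BOX_A = 1_000          # transparent box, always contains $1,000
BOX_B = 1_000_000      # opaque box, filled only if Omega predicts one-boxing

def newcomb_payoff(one_boxes: bool) -> int:
    """Standard Newcomb's problem: Omega's prediction matches the choice."""
    if one_boxes:
        return BOX_B                # predicted one-boxing -> box B is full
    return BOX_A                    # predicted two-boxing -> box B is empty

def payoff(agent_theory: str, preselected: str, one_boxes: bool) -> int:
    """Payoff when Omega has preselected a decision theory in advance."""
    if agent_theory == preselected:
        return newcomb_payoff(one_boxes)   # a variation of Newcomb's problem
    return BOX_A + BOX_B                   # "luckier" agent: take both boxes

# The preselected agent can at best match its Newcomb's-problem performance:
print(payoff("TDT", "TDT", one_boxes=True))    # one-boxing TDT gets $1,000,000
# An agent running any other theory does strictly better:
print(payoff("CDT", "TDT", one_boxes=False))   # CDT two-boxes for $1,001,000
```

Swapping which theory is preselected flips who looks "irrational": preselect CDT instead, and TDT is the one collecting both full boxes.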