I realized that I’d been asking the wrong question. I had been asking which decision would give the best payoff at the time and saying it was rational to make that decision. Instead, I should have been asking which decision theory would lead to the greatest payoff.
I wonder if it is possible to go one more step: instead of asking which decision theory to use (to make decisions), we should ask which meta-decision theory to use (to choose decision theories). In that case, maybe we would find ourselves using EDT for Newcomb-like problems (and winning), but a simpler decision theory for other problems, where EDT is not required to win.
I don’t know what a meta-decision theory would look like (I barely know what a decision theory looks like).
I think that this just gets rolled into your overall decision theory.
For instance, suppose we have two programs. We give all odd numbers to program 1 and it performs some action. We give all even numbers to program 2 and it performs some other action. On the surface, it looks like we’ve got two different programs and a meta-level procedure for deciding which to use. But of course, it’s trivial to code this whole system up into a single program that takes an integer and does the correct thing with it.
My point being that I think it’s misleading to suggest that two decision theories are at work in your example. You’ve just got one big decision theory that does different stuff at different levels (which some decision theories already do anyway).
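To make the analogy concrete, here is a minimal sketch of the odd/even example (function names and return values are illustrative, not from the thread): the "meta-level" choice of which program to run is just an ordinary branch inside one combined program.

```python
def program_1(n):
    # Handles odd numbers: performs "some action".
    return f"odd action on {n}"

def program_2(n):
    # Handles even numbers: performs "some other action".
    return f"even action on {n}"

def combined(n):
    # The apparent meta-level procedure for choosing between the two
    # programs is absorbed into a single program as a plain conditional.
    return program_1(n) if n % 2 else program_2(n)

# combined(3) dispatches to program_1; combined(4) dispatches to program_2.
```

The same move is what the comment suggests for decision theories: a "meta-decision theory" that selects among object-level theories is itself just one bigger decision theory.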
As many of us here secretly hope, the meta-decision theory must “reproduce itself” as the object-level decision theory. Just don’t ask me what this means formally.
That makes sense. It implies that we wouldn’t find ourselves using different object-level decision theories in different situations.
(But is it possible to construct a problem analogous to Newcomb’s on which EDT loses? If so, it seems we would need different object-level DTs after all.)
As I wrote elsewhere in this thread, see the variant of Newcomb’s problem with transparent boxes, or Parfit’s Hitchhiker.
The Smoking Lesion?