I am very much not an expert on this. But: I don’t see why bet 1 “subjunctively dominates” bet 2.
Suppose I’m currently planning to take bet 2, and suppose PA is able to prove that. Then I am expecting to get +1 from the bet.
Now, suppose we consider switching the output of my algorithm from “bet 2” to “bet 1”. Then, counterfactually, PA will no longer prove that I take bet 2, so I now expect to be taking bet 1 in the not-P case, for an outcome of −1.
This is not better than the +1 I am currently expecting to get by taking bet 2.
What am I missing? (My best guess is that you reckon the comparison I should be doing isn’t −1 after switching versus +1 before switching, but rather −1 after switching versus −10 from still taking bet 2 after changing the output of my algorithm; but if so, I don’t understand why that would be the right comparison to make.)
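To make the two candidate comparisons concrete, here is a minimal sketch using only the three payoffs mentioned above (+1, −1, −10); the variable names are mine, and the full bet structure from the original post is not reproduced here:

```python
# Payoffs as stated in the comment (illustrative; names are assumptions):
PAYOFF_BET2_AS_PROVED = 1      # keep taking bet 2 while PA proves I take bet 2
PAYOFF_BET1_AFTER_SWITCH = -1  # switch to bet 1; PA no longer proves "bet 2"
PAYOFF_BET2_AFTER_SWITCH = -10 # still take bet 2 after changing the algorithm's output

# Comparison (a): after-switch outcome vs. before-switch outcome
switch_beats_status_quo = PAYOFF_BET1_AFTER_SWITCH > PAYOFF_BET2_AS_PROVED

# Comparison (b): within the switched world, bet 1 vs. bet 2
switch_beats_bet2_in_switched_world = (
    PAYOFF_BET1_AFTER_SWITCH > PAYOFF_BET2_AFTER_SWITCH
)

print(switch_beats_status_quo, switch_beats_bet2_in_switched_world)
```

Under comparison (a) switching looks bad (−1 < +1); under comparison (b) it looks good (−1 > −10), which is why the choice of comparison is doing all the work.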
To be sure, switching to Bet 1 is great evidence that ¬P is true (that’s the whole point), but that’s not the sort of reasoning FDT recommends. Rather, the question is whether we take the Peano axioms to be downstream of the output of the algorithm in the relevant sense.
As the authors make clear, FDT is supposed to be “structurally similar” to CDT [1], and just as CDT regards the history and the laws as outside its control in Ahmed’s problems, FDT should arguably regard the Peano axioms as outside its control (i.e., “upstream” of the algorithm). What could be more upstream?
Levinstein and Soares write (page 2): “FDT is structurally similar to CDT, but it rectifies this mistake by recognizing that logical dependencies are decision-relevant as well.”
But wouldn’t what Peano is capable of proving about your specific algorithm necessarily be “downstream” of the output of that algorithm itself? The Peano axioms are upstream, yes, but what Peano proves about a particular function depends on what that function is.
I think maybe we’re running into the problem that FDT isn’t (AIUI) really very precisely defined. But I think I agree with Zane’s reply to your comment: two (apparently) possible worlds where my algorithm produces different decisions are also worlds where PA proves that it does (or at least they might be; PA can’t prove everything that’s true) because those are worlds where I’m running different algorithms. And unless I’m confused (which I very much might be) that’s much of the point of FDT: we recognize different decisions as being consequences of running different algorithms.