I think maybe we’re running into the problem that FDT isn’t (AIUI) very precisely defined. But I think I agree with Zane’s reply to your comment: two (apparently) possible worlds where my algorithm produces different decisions are also worlds where PA proves that it does (or at least they might be; PA can’t prove everything that’s true), because those are worlds where I’m running different algorithms. And unless I’m confused (which I very much might be), that’s much of the point of FDT: we recognize different decisions as being consequences of running different algorithms.
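To make that concrete, here's a minimal toy sketch (my own illustration, not anything from the FDT papers): if two "worlds" each run the *same* deterministic decision algorithm on the same input, their decisions can't differ, so observing different decisions already tells you the worlds contain different algorithms. The function names and the Newcomb-flavored choices are just made up for the example.

```python
def algorithm_a(observation: str) -> str:
    # Toy decision algorithm: always one-box.
    return "one-box"

def algorithm_b(observation: str) -> str:
    # A *different* algorithm: always two-box.
    return "two-box"

def run_world(algorithm, observation: str = "newcomb") -> str:
    # A "world" here is just: run the agent's algorithm on its observation.
    return algorithm(observation)

# Same algorithm in both worlds => same decision, necessarily.
assert run_world(algorithm_a) == run_world(algorithm_a)

# Different decisions are only possible because the algorithms differ.
assert run_world(algorithm_a) != run_world(algorithm_b)
```

Of course this bakes in determinism; the interesting (and, as above, imprecisely defined) part of FDT is reasoning about the counterfactual "what if my algorithm had output something else", which this sketch deliberately doesn't touch.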