Yes, that’s basically the same as what I mean by the “universal precommitment” framing. The weirdness is that usually (I think in every other decision-theoretic problem I’ve encountered) the “functional” and “anthropic” framings point in the same direction, but here they do not.
Wei’s motivating example for UDT1.1 is exactly that. It is indeed weird that Eliezer’s FDT paper doesn’t use the idea of optimizing over input-output maps, despite coming out later. But in any case, “folklore” (which, it seems, is slowly being forgotten) does know the proper way to handle this; a sketch of the idea is below.
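For concreteness, here is a minimal sketch of what “optimizing over input-output maps” means: rank entire policies (functions from observation to action) by the utility of the whole program committing to that policy, rather than picking an action per input. The payoff numbers are illustrative placeholders in the spirit of Wei’s coordination game, not his exact example:

```python
# Minimal sketch of UDT1.1-style policy selection: optimize over whole
# input-output maps instead of choosing an output for each input separately.
# Payoffs are illustrative, not the exact numbers from Wei's post.
from itertools import product

INPUTS = ("1", "2")    # the two observations the copies may receive
ACTIONS = ("A", "B")   # the outputs each copy may produce

def joint_payoff(policy):
    """Utility of the same program being run on both inputs.

    Illustrative rule: the copies score only by *mis*coordinating,
    so the best global policy must be asymmetric across inputs.
    """
    return 10 if policy["1"] != policy["2"] else 0

# Enumerate every input-output map and pick the globally best one.
policies = [dict(zip(INPUTS, acts))
            for acts in product(ACTIONS, repeat=len(INPUTS))]
best = max(policies, key=joint_payoff)
print("best global policy:", best, "payoff:", joint_payoff(best))
```

The point of ranking whole maps: a copy optimizing its own input in isolation reasons symmetrically and can’t deliberately miscoordinate with its counterpart, while comparing entire input-output maps makes the asymmetric policy directly available as an option.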
I don’t think the “functional” and “anthropic” framings are meaningful in this motivating example: there aren’t multiple instances of the same program running on the same input.