I’ve updated the footnote to specify that I mean commitments about our future behavior which typically, as a consequence, narrow our future options. I think that’s essentially equivalent to saying a commitment about future behavior that carries any weight at all.

Yes, I would say that FDT solves Parfit’s hitchhiker precisely because of its greater power to precommit than e.g. CDT (which is basically unable to precommit at all).

Assigning a value like honesty a quantifiable dollar figure doesn’t really reflect our behavior or our desires around it. Treating honesty as a numerical preference forces it to be nothing more than a scalar input to a calculation you’re always running. A big part of the point of the Mary/Jane example is that this is a bad idea when it comes to accounting for values you actually care about.