In an article centrally about commitments, I’d expect “precommitment” to carry its standard meaning: a commitment plus a restriction of future choices. Your usage of “precommitment” seems to shift constantly between your stated definition (“a commitment we make about our own future behavior”) and that standard meaning.
Using Parfit’s hitchhiker to illustrate the cooperative value of precommitments is weak. Functional Decision Theory can solve it without precommitments in the usual sense (though you could say precommitments are simply baked into FDT everywhere). Any rational person who values honesty at $1,001 or above would also solve your Parfit’s hitchhiker.
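To make the $1,001 point concrete, here is a toy payoff calculation (all numbers and names are illustrative, not from the original) for an agent who, once already rescued, decides whether to pay the driver:

```python
# Toy payoff sketch of Parfit's hitchhiker for an agent that assigns
# a dollar value to honesty. Numbers are illustrative only.

RIDE_FARE = 1_000         # the driver's asking price, paid once in town
VALUE_OF_HONESTY = 1_001  # dollar-equivalent utility of keeping one's word

def utility_in_town(pays: bool) -> float:
    """Utility after rescue, when choosing whether to pay the driver."""
    if pays:
        return -RIDE_FARE + VALUE_OF_HONESTY  # lose the fare, keep honesty
    return 0.0  # keep the money, break the promise

# With honesty valued above the fare, paying is the better choice even
# after rescue, so the promise to pay is credible and the driver cooperates.
assert utility_in_town(True) > utility_in_town(False)
```

On these numbers, paying nets +$1 of utility versus $0 for defecting, so even a purely forward-looking agent keeps the promise, and the driver can trust it in advance.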
I’ve updated the footnote to specify that I mean commitments about our future behavior which typically, as a consequence, narrow our future options. I think that’s essentially the same as saying a commitment about future behavior that carries any weight at all. Yes, I would say the reason FDT solves Parfit’s hitchhiker is precisely its greater power to precommit than, e.g., CDT (which is basically unable to precommit at all). Assigning a quantifiable dollar figure to a value like honesty doesn’t really reflect our behavior or desires around it. Treating honesty as a numerical preference forces it to be nothing more than a scalar input to a calculation you’re always running. A big part of the point of the Mary/Jane example is that that’s a bad idea when accounting for values you actually care about.