See also Conservation of Expected Evidence: https://www.lesswrong.com/posts/jiBFC7DcCrZjGmZnJ/conservation-of-expected-evidence . You are correct, of course: if you are a rational agent and you believe your future self has remained rational, then Aumann's agreement theorem (https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem) applies here. You know you share the same priors and have common (across time, but so what?) knowledge of each other's rationality, so you cannot rationally disagree with your future self.
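For concreteness, conservation of expected evidence is just the law of total probability applied to your posterior. A minimal sketch, assuming a single binary piece of evidence $E$ bearing on a hypothesis $H$ (the same argument runs over any partition of outcomes):

$$\mathbb{E}\big[P(H \mid E)\big] = P(H \mid E)\,P(E) + P(H \mid \lnot E)\,P(\lnot E) = P(H \wedge E) + P(H \wedge \lnot E) = P(H)$$

If you expect one observation to push your credence up, the other must push it down by a compensating amount, so the average stays put; a predictable net update would mean your current prior is already wrong, and you should simply update now.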
Of course, for fallible biological agents the assumption of rationality doesn’t hold—you are free to predict that future-you becomes stupid or biased in ways you don’t currently accept. In those cases, it can be reasonable to try to bind your future self based on your current self’s beliefs and preferences.