I don’t see how I can simultaneously want to build a habit of doing X, not expect to do X, but still actually do X, and not just once, but regularly. Isn’t this explicit doublethink? I mean, either I believe my realistic estimate (but then I expect it) or I screw up my ability to model my own behavior (e.g. by having bad calibration or introspection).
This is a language problem: I’m using “expect” in the prospect theory sense here, not the probabilistic one. It’s about emotional investment in the outcome, not anticipating probability of occurrence.
You could say that it’s a distinction between “should” expectations and “is” expectations. Prospect theory—or at the very least this application of it—is about “should” expectations, as it’s the basis for establishing a decision frame, which includes a notion of investment/cost as well as expected utility.
The hack I’m experimenting with is setting a perceptual frame where not doing the desired action is perceived as zero loss, and doing the action is perceived as a cheap gain. (In contrast to having an expectation that the default should be that I do the action, in which case doing it is perceived as zero-gain, and failing to do it is a loss.)
I don’t have any long-term experience with the reinforcement aspect yet, but my early results so far (1 behavior, 3 instances in two days) are that the framing is fun. It feels like “You mean I get points just for doing that little thing? Cool!”
(The trickiest part was that I had to first mind-hack away the mental blocks that made it seem low-status to me to think this way.)