Many rationalists face an awkward situation in which their beliefs have a particular incentive not to correspond to reality: when they encounter such situations, they would prefer not to consider how much truth there is in the claim that they are better off avoiding those topics. For example, in each of the above claims, you imply that the ugh field is based entirely on falsehood. In reality, however, there is a good deal of truth in it in each case:
“But in reality, your awareness of the food is independent of its existence.” The badness of the food for you does partly depend on your awareness of it. There is plenty of food rotting in dumps all over the world, and this does not affect any of us. So the rotten food will indeed be worse for you in some ways if you clean the fridge.
“But in reality, this only pushes it closer to the deadline.” Again, you find it boring and painful to work on your homework. If you push it very close to the deadline, but then work on it because you have to, you will minimize the time spent on it, thus minimizing your pain.
“Your prediction is largely independent of your performance.” This is frequently just false; if you plan on 1 hour, you are likely to take 2 hours, while if you plan on 30 minutes, you are likely to take only 1 hour.
I wonder if my examples may have just been bad. Do you agree with my general point that flinch-y topics are hard to debug, and that Litany of Gendlin-style things are useful for doing so?
EX:
In the food example, if you don’t know about rotting food, it’ll become more unpleasant to take out later on.
The homework example may not actually be as good. But note that if you do homework early, you spare future-you the anguish of thinking about how it’s still undone.
For the planning thing, I think I disagree with you. The planning literature includes some minor studies showing that time estimation does slightly and positively affect performance (hence my use of “largely”), but I think there are far more severe consequences that can arise when your predictions are miscalibrated (e.g. making promises you can’t keep, getting overloaded, etc.).
My general point is not that, all things considered, it is better in those particular cases to flinch away. I am saying that flinching has both costs and benefits, not only costs, and consequently there may be particular cases when you are better off flinching away.