[Question] Does improved introspection cause rationalisation to become less noticeable?

I’ve recently updated that noticing is a key rationality skill—not just noticing confusion, but noticing your cognition more generally. This allows you to figure out at a very granular level why you’re not reaching your goals, and then intervene to change those reasons.

For example:

At one point I found myself procrastinating on ordering the catering for an event. Noticing the disconnect between my high-level goal (“make a good event”) and my concrete actions (“spend time on FB”) triggered me to examine what was going on in my mind (this is a particular trigger-response pattern I’ve trained myself to use). I found that I didn’t want to make the call: the last time I called them, they couldn’t hear what I was saying and were kind of rude about it. I didn’t want my phone to be bad or my accent to be inaudible, and so I didn’t want to call them again. I then borrowed a friend’s phone and called them without a problem.

Another example, this time with a cognitive intervention rather than a practical one:

I noticed myself being unhappier than I wanted to be. When the unhappiness clashed with my higher-level desire for happiness, it triggered a noticing process, and I realised my mind was running an algorithm like: “notice happy thought --> remember Hamming problem, or that timelines might be short --> feel bad”. This sounds ridiculously unhelpful when written out, but it is in fact what was going on. So I started training myself to hold on to the happiness in the first part of the chain without automatically falling into the second.

Here’s a worry with this: if part of my cognition is consciously accessible and interpretable, and part of it is not, will extensive noticing-and-intervening cause motivated cognition to become less noticeable?

It will via selection effects, since it’s the more noticeable parts that I’ll change. But this feels more definitionally true than genuinely worrying.

It also might via negative reinforcement, if my mind learns that when subagents make their desires known, they’ll tend to be overruled or modified. (To prevent this, and as a safer policy in my current epistemic state, I make sure to sometimes deliberately not intervene on things I’ve noticed.) But this shouldn’t happen if I genuinely listen to subagents and take their preferences into account, or if the subagent framing doesn’t apply (which seems more plausible in the second example above).


Is there some other reason to believe that an improved ability to notice your cognition will cause rationalisation, motivated cognition, thought patterns highly valued by certain subagents, etc. to become less noticeable?