Does Thinking Hard Hurt Your Brain?

This is a “typical mind fallacy check” post. I’m curious how much people’s experience varies, within and outside the rationalsphere.

I generally find “thinking hard” to be some combination of stressful, headache-inducing, and energy-draining (sometimes I feel like I’m actually burnt out of energy; sometimes I just need to switch tasks). I’ve talked to other rationalists, and they often don’t have this experience, and I’m trying to figure out what’s going on here.

I specifically experience this when doing conscious, deliberate thought, particularly when it strains the bounds of my current skills or my working memory. This most often involves thinking strategically in a careful way (e.g. I get this when playing chess).

When I’m writing characters or imagining talking to people I know (i.e. simulating another person in my head, or pretending to be another person), I also get a headache if I keep it up for over an hour.

I’ve only talked to a few people about this, so I’m not sure how wide the spread of experience is. But at the very least it varies a bit among people-I-know.

Some hypotheses (or partial hypotheses) so far:

A. Deliberate practice is straining. Doing deliberate-practice thinking (i.e. straining at the edge of your competence at a skill) is energy-draining, and people vary in a) which things strain the edge of their competence, and b) how often they do such things.

B. Straining is wasted motion. Someone I know recently argued that thinking shouldn’t drain willpower or otherwise cost resources other than time, and that whenever it does, you’re engaged in some wasted motion. This initially sounded wrong to me, but after experimenting a bit I at least believe that much of “draining” thinking is wasted motion you can learn to skip.

C. People vary in raw cognitive power. Some people may just naturally think faster, or have higher-bandwidth working memory.

For now, I’m mostly interested in getting a sense of the diversity of experience on LessWrong. I also suspect there’s research on this somewhere, and I’m curious whether anyone knows of it or has further thoughts on mechanisms.