EDIT: it’s also possible that John felt fine emotionally, was fully aware of his emotional state, and was simply so good at not latching onto emotions that the bias was highly nontrivial to spot, or some combination. Leaving this comment in case it’s useful for others. I don’t like the tone, though; I may well have been very dissociated as a rationalist (and many are), but it’s not obvious from this alone whether John is.
As a meditator, I pay a lot of attention, in high resolution, to what emotion I’m feeling and to the causality between it and my thoughts and actions. I highly recommend this practice. What John describes in “plan predictor predicts failure” is something I notice several times a month & address. It’s 101 stuff when you’re orienting at it from the emotional angle, and there’s a variety of practices I can deploy (feeling emotions, jhanas, many hard-to-describe mental motions...) to get back to equilibrium and to clear thinking & action. This has overall been a bigger update to my effectiveness than the Sequences, and plausibly to my rationality too (I can finally be unbiased instead of trying to correct for bias or pretend I’m not biased!)
Like, when I hear you say “your instinctive plan-evaluator may end up with a global negative bias”, I’m like, hm, why not just say “if you notice everything feels subtly heavier and like the world has metaphorically lost color”? (That’s how I notice it in myself; tbc, fully nonverbally.) Noticing through patterns of verbal thought also works, but it gives you less data to do metacognition over. You’re noticing correlations and inferring the territory (how you feel) instead of paying attention to how you feel directly (something which can be learned over time by directing attention toward noticing, though not instantly).
I may write on this. Till then, I highly recommend Joe Hudson’s work; it may require a small amount of woo tolerance, but only a small amount. He coached Sam Altman & other top execs on emotional clarity & fluidity. Extremely good. It does require some practice & a willingness to embrace emotional intensity (sometimes locally painful), though.
Like, when I hear you say “your instinctive plan-evaluator may end up with a global negative bias” I’m like, hm, why not just say “if you notice everything feels subtly heavier and like the world has metaphorically lost color”
Because everything did not feel subtly heavier or like the world had metaphorically lost color. It was just, specifically, that most nontrivial things I considered doing felt like they’d suck somehow, or maybe that my attention was disproportionately drawn to the ways in which they might suck.
And to be clear, “plan predictor predicts failure” was not a pattern of verbal thought I noticed; it’s my verbal description of the things I felt on a non-verbal level. Like, there is a non-verbal part of my mind which spits out various feelings when I consider doing different things, and that part had a global negative bias in the feelings it spit out.
I use this sort of semitechnical language because it allows more accurate description of my underlying feelings and mental motions, not as a crutch in lieu of vague poetry.