Born too late to explore Earth; born too early to explore the galaxy; born at just the right time to save humanity.
Ulisse Mini
Do some inner work (emotional/meditative) to learn to process emotions and feel them without suffering. I highly recommend Joe Hudson’s stuff. This is a huge missing piece of the rationalist project: we make emotional decisions and can’t just “notice and be unmoved” by what we’re feeling; you need to get back to a secure equilibrium where there’s no big emotional experience you’re avoiding.
Don’t fall entirely into the rationalist frame; it has some great stuff, and also a lot of subtle bias that can drive you half insane. Speaking from experience lol.
Comet King’s wife
LW generally doesn’t seem to value emotional intelligence and relational maturity very highly relative to intelligence and agency. I was similar, but creating a toxic situation which hurt the person I loved most in the world totally changed my priorities on this. If you’re reading this and feel similar (“oh, this isn’t that important for me, I’m busy”), consider Unsong, and that your Robin could fall by your own deluded, immature hands without you even realizing it’s happening.
It’s hard to rationally convince someone of this, but regarding emotions you show some signs of missing stuff, similar to not seeing colors. I’m not certain, but I think you would derive a ton of value from talking to a good coach/therapist re: empathy, emotions, possibly relationships. Idk if you know David Yu (he co-runs SPARC), but he’s who showed me, in a way I intuitively grokked, how I wasn’t doing real empathy.
You’re an exceptional alignment researcher, but regarding relationships and emotional maturity I think you’re highly underinvested, & it’s obvious to people who’ve invested more in those (such as me, after obsessing over relationships and emotional stuff for the last ~6mo following a horrible breakup: a lot of coaching, meditation, etc.)
Note: I’m not just indexing off the empathy posts; it’s also the “value proposition of romantic relationships” post. That’s something most people intuitively feel relatively early on and don’t need to derive. There are several other signs too, such as not noticing you were depressed. Again, I’m not certain! But it’s definitely worth exploring for you under uncertainty.
John, I think you are still missing something regarding empathy, and it would be good for you to be open to that possibility. This post is a nice clarification, but it still makes me think you don’t get the thing, in the same way I used to not get the thing with my ex. “Suspend viewing them as an agent” is the type of thing I also did, and yes, I could model her somewhat, but I was not really getting things emotionally.
I don’t really view anyone as an agent anymore. Some people are more agenty than others, and wanting to mostly spend time with agenty people is fair, but I don’t think it’s healthy to think about it this way.
Sure, some people are cats compared to other people; some neural nets happened to get better training data than others and better initializations. But disgust and disbelief towards normal people is really not healthy imo; you shouldn’t have to suppress or suspend anything.
Oh, I forgot: lack of empathy generally comes from not being comfortable with feeling every feeling. Chris mentions this; it’s a good post.
E.g. without the disgust, what would John have to feel if he actually ran their mind in sim properly? Maybe helplessness?
Maybe empathizing properly would mean he has to fix them, and he doesn’t want that responsibility?
Idk, there can be all sorts of couplings and locally optimal strategies that result in not feeling those feelings and not empathizing properly.
Putting yourself in their shoes is not empathy. Running their entire mind in (system 1) sim is much closer, and when that fails, just feeling what they’re feeling without adding your reactions on top works. Doing real empathy is exceptionally important for romantic relationships imo.
I had a similar empathy problem a year ago; doing inner work around emotions fixed it. A whole class of interactions I previously muddled through with system 2 (such as people wanting comfort over solutions) is now mostly handled by system 1. I cannot stress this enough: this is a system 1 problem with a system 1 solution.
I would briefly describe what I used to do as “putting myself in their shoes” (not real empathy!) and what I do now as “letting their experience in”, “being them”, etc.
I haven’t written about this much, but Chris describes the same transformation here, with a different frame and view on what blocks it.
There’s probably standard psychological/therapy literature on this too; it seems like a very common block for people to have. (I say “block” because learning to do real empathy is mostly unlearning blocks, NOT learning a new skill.)
EDIT: it’s also possible John felt fine emotionally, was fully aware of his emotional state, and was actually so good at not latching on to emotions that it was highly nontrivial to spot, or some combination. I’m leaving this comment in case it’s useful for others, but I don’t like the tone; I might’ve been very dissociated as a rationalist (and many are), but it’s not obvious from this alone whether John is.
As a meditator I pay a lot of attention to what emotion I’m feeling, in high resolution, and to the causality between it and my thoughts and actions. I highly recommend this practice. What John describes in “plan predictor predicts failure” is something I notice several times a month & address. It’s 101 stuff when you’re orienting at it from the emotional angle. There’s also a variety of practices I can deploy (feeling emotions, jhanas, many hard-to-describe mental motions...) to get back to equilibrium and clear thinking & action. This has overall been a bigger update to my effectiveness than the sequences, plausibly to my rationality too (I can finally be unbiased instead of trying to correct for bias or pretend I’m not biased!)
Like, when I hear you say “your instinctive plan-evaluator may end up with a global negative bias”, I’m like: hm, why not just say “if you notice everything feels subtly heavier and like the world has metaphorically lost color” (how I notice it in myself; tbc, fully nonverbally)? Noticing through patterns of verbal thought also works, but it’s just less data to do metacognition over. You’re noticing correlations and inferring the territory (how you feel) instead of paying attention to how you feel directly (something which can be learned over time by directing attention towards noticing, not instantly).
I may write on this. Till then, I highly recommend Joe Hudson’s work; it may require a small amount of woo tolerance, but only a small amount. He coached Sam Altman & other top execs on emotional clarity & fluidity. Extremely good. It requires some practice & willingness to embrace emotional intensity (sometimes locally painful), though.
The biggest failure of the Rat community right now is neglecting emotional work. The biggest upgrade to my rationality BY FAR (possibly more than reading the sequences, even) has been in feeling all my emotions & letting them move through me till I’m back to clarity. This is feminine-coded rationality imo (though only for silly cultural reasons). AoA / Joe Hudson is the best resource on all this. He also works with Sama & OAI compute teams (lol).
A few concrete examples from my life.
When I fully feel my anger and let it move through me (apologies for the woo terms!) I get back to clarity. My natural thoughts are correct; I don’t need to do galaxy-brained metacognition & self-correction to maintain the semblance of clear thinking like I used to.
When I fully feel my shame & forgive/accept myself, it becomes much easier for me to execute long-term self-improvement plans, where I tackle character flaws (e.g. lower conscientiousness than I’d like) with a bunch of 5% improvements. Previously I felt too much shame to “sit in the problem” for so long in a gradual-improvement approach. Self-acceptance has made self-improvement stuff so much easier to think about clearly.
In general: Emotion biasing me → fully welcome the emotion → no longer biasing me, just integrated information/perspective! It also feels better and is a practice I can do. Highly recommend!
I doubt a more emotionally integrated rationalist community would fix the gender problem, but it would definitely help. I’ve heard girls I know call the Rat/EA cluster “inhumane”, and IMO this is getting at something that repulses a lot of people: a serious focus on head over emotions/heart/integrated bodymind. Not as bad as Spock, but still pretty bad. Some lip service is paid to Focusing and Internal Double Crux (which are kind of emotion-y), but empirically most rats aren’t very emotionally well-integrated; there’s still a “logical part” hammering down the more “irrational” parts, as opposed to working together. And fixing this requires inner work! Not just reading Replacing Guilt once, for example.
All this relates to insecurity as well; it’s very hard to think rationally when you’re insecure. Preverbal thoughts and expectations will be warped at a deep level by emotional pieces trying to protect you. A lot can be done about that, though. Chris is the main pioneer in the emotional security space IMO, though the AoA/Joe Hudson stuff helps a ton too. All paths lead to the same goal.
I really don’t like how this post blends supernatural, fictional elements with the practical. The caveats about how wizard power in reality isn’t like wizard power in stories are good but not sufficient; the actively misleading term continues to warp people’s cognition.
For example, the post doesn’t mention how technology (I’m not going to call it wizard power) generally requires a lot of coordination and capital (“king power”) to get working and to produce at a reasonable price. Magic is sexy and cool because you’re “doing it yourself”, whereas technology is a large team effort.
John seems to be warped by this effect: notice how he talks about DIY ~entirely in terms of doing stuff alone instead of in large groups, because that’s sexier if you’re an individualist & distrust all large groups. You would not come up with “making your own toothbrush” as “wizard power” without these cognitive distortions (individualism + magical thinking).
But really, my main problem with this isn’t that it lacks some caveats; it’s the general pattern of Rats actively distancing themselves from reality, often in a way with undercurrents of “our thoughts here are special and our community is the best”. I know this isn’t enough to convey the feeling I get to those who don’t share it. It’s hard to see when you’re in the community, but looking back after leaving, the distortions are extremely obvious. I might write more about this at some point, or maybe not.
I like this sentiment:
Forget RadVac. I wish for the sort of community which could produce its own COVID vaccine in March 2020, and have a 100-person challenge trial done by the end of April.
I wish there were more action and clear thinking in the rat community, without the weird cognitive distortions that are hard to see except from outside.
You decide what is a win or not. If you’re spiraling, give yourself wins for getting out of bed, going outside, etc. Morale compounds, and you’ll get out of it. This is the biggest thing to do imo. Lower your “standards” temporarily. What we reward ourselves for is a tool for being productive, not an objective measure of how much we did that needs to stay fixed.
I think asking people like Daniel Ingram, Frank Yang, Nick Cammarata, Shinzen Young, Roger Thisdell, etc. how they experience pain post-awakening is much more productive than debating 2500-year-old teachings which have been (mis)translated many times.
Answering my own question, a list of theories I have yet to study that may yield significant insight:
Theory of Heavy-Tailed Self-Regularization (https://weightwatcher.ai/)
Singular learning theory
Neural tangent kernels et al. (deep learning theory book)
Information theory of deep learning
I wasn’t in a flaming-asshole mood; it was a deliberate choice. I think being mean is necessary to accurately communicate vibes & feelings here. I could serialize stuff as “I’m feeling XYZ and think this makes people feel ABC”, but that level of serialization won’t activate people’s mirror neurons & get them to actually internalize anything.
Unsure if this worked; it definitely increased controversy & engagement, but that wasn’t my goal. The goal was to shock one or two people out of bad patterns.
Sorry, I was criticizing a pattern I see in the community more than you specifically.
However, basically everyone I know who takes innate intelligence as “real and important” is dumber for it. The idea is very liable to mode-collapse into fixed mindsets, and I’ve seen this (imo) happen a lot in the rat community.
(When trying to criticize a vibe / communicate a feeling, it’s more easily done with extreme language; serializing loses information. Sorry.)
EDIT: I think this comment was overly harsh; I’m leaving it below for reference. The harsh tone partly came from being slightly burnt out from feeling like many people in EA were viewing me as their potential Ender Wiggin, and internalizing it.[1]
The people who suggest schemes like the one I’m criticizing are all great people who are genuinely trying to help, and likely are helping.
Sometimes being a child in the machine can be hard, though, and while I think I was ~mature and emotionally robust enough to take the world on my shoulders, many others (including adults) aren’t.
An entire school system (or at least an entire network of universities, with university-level funding) focused on Sequences-style rationality in general and AI alignment in particular.
[...]
Genetic engineering, focused-training-from-a-young-age, or other extreme “talent development” setups.
Please stop being a fucking coward speculating on the internet about how child soldiers could solve your problems for you. Ender’s Game is fiction; it would not work in reality, and that isn’t even considering the negative effects on the kids. You aren’t smart enough for galaxy-brained plans like this to cause anything other than disaster.
In general, rationalists need to get over their fetish for innate intelligence and actually do something instead of making excuses all day. I’ve mingled with good alignment researchers; they aren’t supergeniuses, but they did actually try.
(This whole comment applies to Rationalists generally, not just the OP.)
[1] I should clarify this mostly wasn’t stuff the Atlas program contributed to. Most of the damage was done by my personality + heroic responsibility in rat fiction + the dark arts of rationality + the death with dignity post. Nor did Atlas staff do much to mitigate it; seeing myself as one of the best they could find was most of it, cementing the deep “no one will save you or those you love” feeling.
Excited to see what comes out of this. However, I do want to raise attention to this failure mode covered in the sequences. I’d love for those who do the program to try to bind their results to reality in some way, ideally producing a concrete account of how they’re substantively stronger afterwards, and of how this replicated with other participants who did the training.
Really nice post. One thing I’m curious about is this line:
This provides some intuitions about what sort of predictor you’d need to get a non-delusional agent—for instance, it should be possible if you simulate the agent’s entire boundary.
I don’t see the connection here? Haven’t read the paper though.
Quick thoughts on creating an anti-human chess engine.
Use maiachess to get a probability distribution over opponent moves based on their Elo. For extra credit, fine-tune on that specific player’s past games.
Run an expectiminimax search over the maia predictions. Bottom out with a stockfish evaluation when going deeper becomes impractical. (For an MVP, bottom out with stockfish after a couple of ply; no need to be fancy.) Also note: we want to maximize P(win), not centipawn advantage. (A sketch of this step is below, after the list.)
For extra credit, tune hyperparameters via self-play against maia (a simulated human). Use lichess players as a validation set.
???
Profit.
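A minimal Python sketch of the expectiminimax step, under stated assumptions: `maia_move_probs` (returning maia’s move distribution, e.g. a wrapper around lc0 loaded with maia weights) and `stockfish_cp` (a stockfish eval wrapper returning centipawns from our side’s perspective) are hypothetical helpers, not real library calls, and the centipawn-to-P(win) conversion is a common heuristic rather than part of the plan above. Board handling uses the real python-chess package.

```python
# Sketch only: maia_move_probs and stockfish_cp are hypothetical helpers.
import chess

def win_prob(cp: int) -> float:
    # Common logistic heuristic mapping centipawns to P(win);
    # the 400-cp scale is a rule of thumb, not gospel.
    return 1.0 / (1.0 + 10 ** (-cp / 400.0))

def expectiminimax(board: chess.Board, depth: int, our_turn: bool) -> float:
    # Estimated P(win) for us. Our nodes maximize; opponent nodes take an
    # expectation over maia's Elo-conditioned move distribution, instead of
    # assuming a perfect minimizing opponent like plain minimax would.
    if depth == 0 or board.is_game_over():
        # stockfish_cp is assumed to return centipawns from OUR perspective.
        return win_prob(stockfish_cp(board))
    if our_turn:
        best = 0.0
        for move in board.legal_moves:
            board.push(move)
            best = max(best, expectiminimax(board, depth - 1, False))
            board.pop()
        return best
    total = 0.0
    # In practice, truncate to maia's top-k moves to limit branching.
    for move, prob in maia_move_probs(board).items():  # hypothetical helper
        board.push(move)
        total += prob * expectiminimax(board, depth - 1, True)
        board.pop()
    return total

def best_move(board: chess.Board, depth: int = 4) -> chess.Move:
    # Root node: pick the move maximizing estimated P(win), not centipawns.
    def score(move: chess.Move) -> float:
        board.push(move)
        p = expectiminimax(board, depth - 1, False)
        board.pop()
        return p
    return max(board.legal_moves, key=score)
```

The design choice worth noting: the opponent node is an expectation, not a min, which is exactly what makes the engine “anti-human”; it will happily enter objectively dubious lines where the maia model predicts the human is likely to go wrong.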