Should rationalists be spiritual / Spirituality as overcoming delusion

Kaj_Sotala

I just started thinking about what I would write to someone who disagreed with me on the claim “Rationalists would be better off if they were more spiritual/religious”, and for this I’d need to define what I mean by “spiritual”.

Here are some things that I would classify under “spirituality”:

  • Rationalist Solstices (based on what I’ve read about them, not having actually been to one)

  • Meditation, especially the kind that shows you new things about the way your mind works

  • Some forms of therapy, especially ones that help you notice blindspots or significantly reframe your experience or relationship to yourself or the world (e.g. parts work where you first shift to perceiving yourself as being made of parts, and then to seeing those parts with love)

  • Devoting yourself to the practice of some virtue, especially if it is done from a stance of something like “devotion”, “surrender” or “service”

  • Intentionally practicing ways of seeing that put you in a mindstate of something like awe, sacredness, or loving-kindness; e.g. my take on sacredness

(Something that is explicitly not included: anything that requires you to adopt actual literal false beliefs, though I’m probably somewhat less strict about what counts as a true/false belief than some rationalists are. I don’t endorse self-deception but I do endorse poetic, non-literal and mythic ways of looking, e.g. the way that rationalists may mythically personify “Moloch” while still being fully aware that the personification is not actual literal fact.)

I have the sense that although these may seem like very different things, there is actually a common core to them.

Something like:

  • Humans seem to have evolved for other- and self-deception in numerous ways, and not just the ways you would normally think of.

  • For example, there are systematic confusions about the nature of the self and suffering that Buddhism is pointing at, with minds seemingly hardwired to e.g. resist/avoid unpleasant sensations and to treat that resistance as the way to overcome suffering, when it is actually what causes suffering.

  • Part of the systematic confusion seems to be related to social programming: believing that you are unable to do certain things (e.g. defy your parents/boss) so that you actually become unable to do them and fit in better to society.

  • At the same time, even as some of that delusion is trying to make you fit in better, some of it is also trying to make you act in more antisocial ways. E.g. various hurtful behaviors that arise from the mistaken belief that you need something from the outside world to feel fundamentally okay about yourself and that hurting others is the only way to get that okayness.

  • For whatever reason, it looks like when these kinds of delusions are removed, people gravitate towards being compassionate, loving, etc.; as if something like universal love (said the cactus person) and compassion were the motivation that remained when everything distorting it was removed.

    • There doesn’t seem to be any strong a priori reason why our minds had to evolve this way, even if I do have a very handwavy sketch of why this might have happened; I want to be explicit that this is a very surprising and counterintuitive claim, one that I would also have been very skeptical about if I hadn’t seen it myself! Still, it seems to me like it would be true for most people in the limit, excluding maybe literal psychopaths, whom I don’t have a good model of.

  • All of the practices that I have classified under “spirituality” act either to help you see the functioning of your mind more clearly and pierce through these kinds of delusions, or to put you into mind-states where the influence of such delusions is reduced and you shift closer to operating from a stance of compassion or service to something greater.

  • Important caveat: the tails do come apart; I don’t think that all spiritual practice necessarily leads to being loving and compassionate. Like, you can also use spiritual practices to cultivate something like a warrior archetype to better conquer all of your foes, and consider that the epitome of practice without ever getting to the universal love part. I think I am intentionally No True Scotsmaning a little bit when I define the prototypical spirituality as the kind of spirituality that I like the most.

    • To be clear, I don’t think there’s anything wrong with cultivating a warrior archetype; I strive to cultivate one myself. The thing that I’m now talking about is cultivating a warrior mentality without also cultivating something like kindness and compassion.

  • Another thing I still want to explicitly tag is something around… spiritual practice as intentionally taking on different views and ways of looking, which may involve construction as well as deconstruction. Like, the way I’ve expressed things so far leans pretty strongly in the direction of deconstruction. But Rob Burbea has this wonderful talk where, in the beginning, he says that… to attempt a translation of his words into rationalist language: if you are stuck in just one way of interpreting things, then you may see that interpretation as the one and only reality. But if you can try on many different ways of seeing the same thing, you can come to see that no single one of them is the one reality. (Scott Alexander had a blog post where he mentioned that the thing that got him to stop believing in history cranks was reading many different history cranks who all had very convincing but mutually exclusive theories of history. Kind of like that—if you can play with many different ways of seeing the world and notice how they all seem convincing, then they may all become less convincing as a result.)

  • So intentionally trying new ways of looking can help you see through some delusions relating to automatically accepting your own interpretations as reality. (Helping you better see what the Buddhists call emptiness.) And you can then use some of those new ways of looking to craft your mindstates in a direction that’s closer to where you want them to be. And while you can use that to move toward universal love and compassion, not everybody does and it’s also totally possible to take some other path.

romeostevensit

Awesome. Some preliminary, orienting remarks. I think you already know a bunch of this, but it may be helpful for an audience and might suggest some dovetailing trailheads.

I got involved in these practices partially because I wanted to suffer less and partially because I wanted to investigate Buddhist claims and whether they had any relevance to the epistemology project in general or AI in particular. Buddhists are one of the sets of people who claim that we’re fundamentally deluded about what’s really going on with this human experience, which is big if true. This also seems like the reason that claims that Buddhism isn’t quite a religion often ring a little hollow. I was drawn to pragmatic dharma because the people there seemed to be some of the few making this explicit distinction, that it is desirable to be able to discern the difference between claims about experience and claims about reality. It’s not really clear what could, even in principle, count as evidence for a lot of metaphysical claims. This is also where we have some heartening material from the religion itself, since there are famously a bunch of dialogues with the Buddha where he refuses to answer metaphysical questions because they are malformed or otherwise ‘not even wrong.’

Long story short, my experience so far has been:

On the suffering front: does what it says

On the AI front: not a central or crucial consideration most likely

On the epistemology front: mostly pragmatically useful, I think. Here it’s the pretty straightforward story that the practices Buddhism recommends can help people attain process transparency for their internal representations and thoughts, and this can lead to noticing some pretty obvious-in-hindsight errors.

Oh, I forgot one important claim which you occasionally see float around: might increase IQ. On this front I have a take, but it’s pretty much conjecture on my part. Practice seemed to make me answer about 1 additional question correctly on Ravens, which both isn’t nothing and is also within test-retest variance. I’d dismiss this more heavily if it didn’t correspond with a subjective effect related to the above-mentioned transparency. It feels from the inside like I do reason just a bit better by having a meta process that monitors what sorts of things processes are doing and whether they are promising avenues. I think spending many, many hours trying to train that process led it to operate slightly better than it did before. This is a small enough effect that I don’t think anyone would want to try to train it with that much effort if there weren’t any other benefits, but it seems worth noting.

Getting back to the big-if-true claims, it seems to me that they mostly map pretty cleanly from ‘investigating the universe’ to ‘investigating how a human nervous system renders the universe.’ Which doesn’t mean I’m a hardcore materialist trying to convince/reassure other hardcore materialists; I’m more a neutral monist and agnostic on extending these sorts of things to broader claims.

So we have the more restricted claims: there are these practices, there are some benefits, the practices lead to the benefits for some fraction of people who try them in just the right way for long enough, sometimes with periods of poor feedback. Is this positive expected value for people to push themselves towards? There are periods right after big breakthroughs with a lot of evangelical energy, but overall I’d say I’ve become more conservative on this question over time. That is to say, a lot of this material seems mostly relevant for those who can’t help but be strongly drawn towards it due to experiences they are spontaneously having, etc. Now, does that mean I think that key decision makers in a bunch of orgs and research roles aren’t making obvious errors that Buddhist practice could hypothetically help with? No, but I also don’t think these people are going to get motivated to update their priorities with such long feedback loops. Like, the strongest evidence is the actual post hoc insights: that there are connections between certain mental representations and negative side effects that are very non-obvious.

I also think, per the no true scotsman pitfall sort of point, that there is a substantial downside to go along with these purported benefits. Which is that these practices are often aimed at introducing a certain representational flexibility. That sounds great in theory, but in practice, the degrees of freedom people have in their representations are part of the circuitry they have for balancing over- and underfitting, and too much naive practice, just like too much psychedelic use, can put someone off balance. I have to keep an eye on this in myself. I have very flexible representations, such that if I’m not careful I can just have a reasonable-sounding answer for everything and lose track of how the moving parts map to each other such that I expect conservation of expected evidence and solid predictive power. I also think this is why these practices and psychedelics are contraindicated for people who already have over- or underfitting problems.

Another potential issue is that becoming more internally aligned seemingly can increase the shear with evolutionary alignment. I.e. Buddhism might shred fertility. Though I don’t know the effect on the margin for the sort of person who is seeking it out in the first place vs a randomly selected person.

I do think that coming to the conclusion that people are fundamentally oriented towards beneficial actions is partially an epistemic question and partially an ontology question, but that it is a pragmatically useful, and in most ways approximately true, way to see things that helps undermine a bunch of superficially attractive but ultimately flawed blackpill attractors.

Kaj_Sotala

Great! Unsurprisingly I agree with a lot of this, though hopefully not so much that I couldn’t say anything on top of it. :)

I read you as focusing on the question of “should rationalists be spiritual”, so let me elaborate a bit more on that. So far I’ve defined what I mean by spirituality but haven’t directly answered that question.

The original question said that “rationalists should be more spiritual/religious”, which is a bit ambiguous; I think the position I’d be willing to stand behind would be something like “on average, other things being equal, rationalists/people would benefit from having a non-zero amount of spirituality”. Whether any given rationalist would benefit from it overall depends on their psychological profile, their opportunity costs, and what they value in general (e.g. how much they care about suffering less).

Looking at the benefits with the same breakdown as you:

On the suffering front: In the best case it does exactly what is advertised, yeah. A significant reduction in suffering has definitely been my experience. Though progress has also been very uneven and has at times run into serious blockers that required trying a lot of different things to overcome. Then finally I would find something that worked, or someone skillful enough to be able to help; and of course, I still have plenty of unsolved issues. So I think that while a massive reduction in suffering is the best case, it’s also possible to be unlucky and not get anywhere despite a massive amount of genuine trying.

On the AI front: I feel like doing this kind of practice gives intuitions on how human values work, and that might be valuable for thinking about something like value learning. I’ve sometimes had the sense from looking at some value alignment proposals that they are grounded in a somewhat mistaken view of how values work, in a way that this kind of understanding would help counteract. But this is hard to express more precisely, since it’s often just been a sense of “this doesn’t feel quite right” rather than a more exact argument. The preference fulfillment hypothesis is the closest I’ve gotten to being able to formulate an intuition derived from these things that I suspect would be useful as a concrete research direction, but one would need a lot more actual technical expertise to build on it. And without me having that expertise, it’s hard for me to tell whether I’m just totally mistaken when I think that it might be a useful direction.

On the epistemology front: Agree with this being mostly pragmatically useful. So something like, becoming more able to notice that various greyed-out options in your daily life don’t need to stay that way. And seeing through various emotional tangles that cause people practical difficulties, e.g. the kinds of things that cause people to think less clearly in the context of intimate relationships.

And agree that there are also epistemic dangers. Another thing that may happen: if you have previously just believed in science for social reasons, without having a good epistemology, you may then notice that you had only believed in science for social reasons. It’s then easy to make the jump from this to something like “all science is just motivated reasoning that’s no more valid than anything else and I can just believe in anything I want”, especially if you have been exposed to memes suggesting something like this.

A related thing is that a lot of people have some degree of STEM trauma that makes them hostile toward scientific thinking. And while this stuff can clean up some of your trauma and motivated reasoning, it’s not a magic bullet that would be guaranteed to get all of it. I think that it by default brings to awareness the kinds of issues that are generating a lot of friction within the mind-system or that have been imperfectly covered up, but a lot of stuff that happens to cohere with the mind’s existing defenses and belief structures can go unnoticed unless you explicitly go looking for it. And you’re not guaranteed to find it even when you do go looking.

One more consideration that comes to mind is that there can be a certain temptation to go all reversed stupidity. Especially if you listen to some more hardcore scientism, you’ll hear views like meditation is all fake, all of spirituality is dumb, anything that people say about “energies” is nonsense, and so forth. Then if you start practicing all of this and notice that actually a lot of the “woo” claims are true, you might jump to the opposite extreme of thinking that the STEM people are wrong about everything while the “woo” people are right about everything. Which of course isn’t right either; e.g. even though there is something real to “energies”, it still doesn’t mean what some of the woo-er woo people claim. Things still add up to normality, in the sense that all the actual correct predictions that science has come up with are still valid.

This also touches upon the topic that, while I mentioned quite a few different practices above, different practices of course have different effects. I generally tend to think that most people, at least if they had to choose, would be better off starting with parts work than with something like classical meditation. I think parts work tends to be quicker and more effective at bringing about the specific kind of change that the typical person actually wants. It also doesn’t have much in the way of epistemic risks, as far as I can see.

But of course, it has its drawbacks as well. Just parts work alone is going to run into diminishing returns eventually, and it won’t do as much to reduce suffering in general—it only targets specific causes of suffering. But some people seem to be lucky in that most of their suffering comes from a relatively small number of big traumas that parts work can effectively deal with.

The question of evolutionary alignment is interesting. It seems to me like the effect could go either way. One might become more content not having children, or one could also eliminate various traumas from modern culture that reduce fertility.

But these practices do often break evolutionary mechanisms intended to socialize you more strongly into society. E.g. various types of self-hate and other misery look to me like mechanisms for making sure you conform to the surrounding culture. And shedding that can be risky for the individual, since it makes you more willing to consider doing things that other people despise, which means that other people will also be more likely to hurt you for doing those things.

And as a consequence of that, a lot of the more renunciate-type Buddhism—which probably does reduce fertility—has been selected to produce adherents that don’t cause many problems for society, but rather just go to their monasteries and meditate and let the existing social order continue intact. The schools that removed social conditioning and also empowered practitioners to upend the social order tended to get targeted for destruction. (Or at least so I suspect, and some people on Twitter said “yes this did happen” when I speculated this out loud.) So we have a situation where some of the schools and their corresponding practices and ideologies have been selected by cultural evolution to reduce alignment to biological evolution. But one can still do practices that do less of that.

Which goes to highlight the way that a lot of this stuff tries to present itself as objective, “just investigate the mind and you’ll come to these conclusions on your own”. To some extent that’s certainly true, but there’s still a lot of room for ideological influences to sneak in and produce different kinds of outcomes. And you should put some thought into what exactly it is that you are aiming at.

romeostevensit

One of the ways I’ve become more conservative about recommending Buddhist practice is also a way I’ve become more conservative about being gung-ho on startups. With more experience it became clearer to me that personality factors strongly influence how things go. After getting a cohort of monks killed, the Buddha famously started adopting different practice recommendations for people with different personality inclinations. People often seem drawn to practices that double down on their existing strengths and avoid their existing weaknesses. And some of that is totally fine; people should lean into their strengths somewhat. But too much winds up putting people in a cul-de-sac/attractor of their own making, where moving anywhere else feels bad because they’re so good at what they’ve trained already.

I agree that emotional integration/parts work is often helpful for triaging what is going on with practices. For myself, there was a sharp uptick in motivation that came about from a moment of emotional clarity where I noticed that I couldn’t, in practice, choose something and then give it a dedicated try for, say, a month, and that this lack of capability would have likely been baffling to past generations. This bothered me enough to take on Shinzen-style noting for a month, which was my biggest inflection point.

WRT the AI question: like many, I’ve updated from the GPT era that value loading probably won’t be the bottleneck for alignment. This was further strengthened by my own practice experience leading me to believe (a la the brahmaviharas) that human values are probably simpler/more compressible than I previously thought. I think the AI will understand our values and just not care. The same way we could, as humans, figure out much better what ants, or mice, or pigs, or dogs really want, but instead we just breed them for our own use in ways they certainly would object to if uplifted. The small probability I place on Buddhism having something crucial to say about AI revolves more around understanding of intentionality. Buddhism seems to hold that conscious moments all contain intentional content, which is considered an open question in western philosophy. Better understanding here might allow us to create neuromorphic architectures that care about anything at all.

WRT rats going more spiritual on the margin: I think there’s a more restricted claim I would make, that it is possible to improve the relationship between system 1 and 2. My understanding from having worked on this and gotten some feedback from S1 is that it agrees that they are different types of processing, and the ideal situation is for each of them to handle the kind of computation they are best at. By default, and scaling up with neuroticism, S2 tries to run the show, and this is an obviously bad configuration. I think this is closer to what CFAR eventually tried to turn into, rather than ‘make S2 run better’, but that it still mostly didn’t work? Leverage came at this from a totally different angle and hit known-to-yogis failure modes, but didn’t have enough awareness to know that there are people who know about this sort of failure mode. And again, I think these sorts of failures are pretty unsurprising for people with the rat loadout, who typically have a fraught history with authorities telling them to just have faith in weird, unsubstantiated claims. Besides emotional integration, I also often coach such people to ignore the more mysterian claims and focus on legible skill development in areas that they already have some sense will improve things in mundane ways. That these things can also be turned to contemplative practice is a bonus that is safer on failure than the tacit frames of panacea or grandeur that commonly float around contemplative communities.

Kaj_Sotala

Re: AI—yeah, “AI will understand our values and not care” is of course the default outcome, and it does also look to me like human values are simpler than I used to think or that a lot of people think. Something like a relatively basic set of universal human needs, with lots of complicated strategies then building on top of that. The concern that I had was that if this is true and people working on value loading don’t realize it, then they might assume that those strategies are our real values, and try to design AIs in such a way that the AIs end up optimizing on the wrong level.

On the other hand, it’s plausible that one won’t need Buddhism to get to the right conclusion. Something like Shard Theory is already pointing in a similar direction, and I don’t think that familiarity with spirituality is the only reason why it’s gotten a broadly positive reception. And it looks to me like modern AI is taking a broadly similar path to intelligence as human brains are—including some results suggesting that subagent-type approaches lead to better reasoning [1 2]—so it could easily be that anyone working on AGI will eventually notice that simple intrinsic values can give rise to complex-looking instrumental ones and then generalize from that to human values.

I’m guessing that intentionality of mind-moments might be similar, in that if human minds are evolved that way then it might be because that’s one of the most natural ways of building a mind, and then AI designers might discover that as well.

Re: “S1 and S2 are each best at different types of computation and better let them each do what they’re best at, but by default S2 tends to steal the show”—agree. Also WEIRD societies seem to push people more into S2 mode, I suspect in part as a social control mechanism (learn to ignore your body and your emotions in order to sit still in your chair and listen to your teacher) but also in part because S2 knowledge is easier to transmit scalably and that’s how we got modern science and technology.

I’ve seen people make the point that in the Buddha’s time, people were a lot more embodied by default than we are now, so meditation instructions like “notice your thoughts” might have landed very differently than they do for modern Westerners. It might be that back in that time, people could have benefited from practices that strengthened their S2 relative to their S1, whereas today that’s the exact opposite of what a lot of people need.

Generally the questions of “who needs what” and “what are the individual bottlenecks” are really interesting to me. I think it’s common for people to find something that works really well for them—for me this was Internal Family Systems—and then find themselves confused about why it hasn’t taken over the world yet and why so few people have heard of it. I was just recently talking with someone who found Coherence Therapy similarly effective for him and who had a similar confusion. Here’s the list of hypotheses I offered him:

  • It does work amazingly well and is starting to gain popularity, but there are lots of things claiming to be miracle cures and most of them are fake, so people tend to reasonably discount claims about its effectiveness, which slows down adoption

  • You can somewhat learn it from a book but it’s best learned from someone who already knows it, further slowing down the spread

  • There are a lot of people it works really well for but there are also quite a few who don’t have sufficient access to their emotions for it to work and struggle with it

  • Even for people who do find it to work amazingly well, there’s depth to the issues; if the beliefs behind their issues are buried sufficiently deep in the psyche, it may be that the surface-level manifestations are easily treated but then the deeper ones start to bubble up and undo some of the progress, and it’s hard to access those deep-level ones. (This describes me: I found IFS amazingly useful, and after several years I was less of a mess but still basically a mess; I don’t think I would’ve gotten where I am now with it alone. Also, honestly, I still feel like a mess about some pretty major things.)

  • Something that could be explained by the above factors, though I’m not totally sure that it is: many therapies seem to get less effective as they become more mainstream? See e.g. https://slatestarcodex.com/2019/11/20/book-review-all-therapy-books/

Also @pjeby had some good discussion on blindspots and how it’s impossible to mass-produce effective change techniques: “It’s a bit like the Interdict of Merlin in HPMOR: successful techniques can only be passed from one living mind to another, or independently discovered. You can write down your notes and share the story of your discovery, and then people either discover it again for themselves, learn it from interacting with someone who knows, or go through the motions and cargo-cult it.”

I get the sense that you have better models of the individual factors and common problems that pop up for would-be change-mass-producers than I do, and would be very interested in hearing more about them. E.g. I don’t know much about what happened with Leverage and what were the known-to-yogis failure modes that they ran into.

Also yeah, just telling people to ignore the mysterian claims and to focus on the more clear and legible benefits seems like a good strategy to me. In my own emotional coaching, I don’t usually bring up any of the more esoteric things unless the client indicates some active interest and openness for it. And even if someone was interested in that stuff, I tend to agree with the take that it’s best to just do the practices for the sake of their immediate or at least short-term benefits rather than doing something like focusing on vague and often oversold claims of enlightenment or such.

romeostevensit

In general, my guess is that humans prioritize homeostasis over trying to enact large-scale changes on themselves. Consider that the LW community is probably 99th percentile openness and it is still a struggle for most of us. I think there is some sort of threshold of self-organization past which kicking the system has positive expected value, whereas below that threshold it has negative expected value. I think the following model comes from Adyashanti, who has been teaching for several decades, and may have been repeated by Shinzen Young at some point. Basically, they had seen three classes of people motivated enough to overcome status quo inertia: those that were suffering a lot, those that had a strong compulsion to understand the truth about themselves/the world, and those who lucked into a pleasant, jhana-based path (i.e. those who found jhana strangely easy). LWers might object that they have strong motivation to find the truth, but when push comes to shove, most people back off when it seems like the search process is going to potentially destroy their ability to orient to the things they care about.

More specifically, parts of the process can be ontologically destabilizing. When people get a glimmer of emptiness in their practice, as Chögyam Trungpa says, they ‘bolt off the cushion and run.’ So, e.g. if you suddenly realize that your personal constructions/notions of truth, beauty, love, safety, non-suffering, etc. are predicated on confused and empty categories, that can generate an existential terror. To put it in more traditional terms: the ego knows on some level that it is empty of ‘real’ (non-reified) existence and has the belief that it needs to avoid that fact if it is to accomplish any of the goals for which it was constructed (navigating the world successfully such that you don’t literally die). This is a means-ends confusion and also some reasoning from consequences. The idea that lots of other people have gone through this process often isn’t reassuring enough in the actual critical moment. Some of my friends have proposed the model that you need to be in physical contact with a community of such people in order to get your CNS to relax enough, and that this creates a threshold effect where awakening is rare in most times and places but you can get concentrated clusters of it popping up when the conditions are right.

So far, this picture is mostly one of passive ignorance. There’s also the issue of active adversarial patterns. Patterns that don’t conform to human values have fewer constraints and thus can mutate in a broader variety of ways, some of which make them more adaptive. In the Buddhist model of human psychology, we are by default colonized by parasitic thought patterns, though I guess in some cases, like the aforementioned fertility-increasing religious memes, they should be thought of as symbiotes with a tradeoff, such as degrading the hosts’ episteme. In this view, healthy societies would be ones with more vertical transmission of inoculating memes from a young age. Things like the western educational model can be thought of as a generalized medium for cultural memetic transmission, and we don’t have a good model of parasitic capture. We are essentially pre-germ-theory on memes.

Kaj_Sotala

Trying to maintain homeostasis makes sense—I’ve repeatedly had the experience of “ugh I just want to be done with all this healing stuff”, but then been forced to keep at it due to repeatedly running into new issues.

I’m guessing that another part of it is also something like: often the kinds of improvements that these practices offer don’t sound all that tempting to one’s schemas. If your trauma says that you need to be rich and successful to be loved, and someone comes and says that this thing can heal your need to be rich and successful, then that’s not going to be very appealing. In that example the cure sounds kind of opposed to the schema. But it can be more subtle too, like the proposal just not having any shape that the schema could make sense of, and thus not being emotionally appealing.

An analogy that comes to mind is, people have all kinds of hobbies. I know there are some hobbies that some people find great that I have no interest in. But that’s not because I’d ever tried them; their description just doesn’t sound appealing to me. It may not sound actively anti-appealing either, but it just lacks anything that would feel emotionally resonant. (And then occasionally I might try out one of them and go “this is great, why didn’t anyone tell me before” when lots of people had been praising it all along.)

Over time one can learn a kind of meta-expectation of feeling better after applying these techniques to whatever the most recent thing is, but it takes time to get there and develop that trust. Especially if the technique is one of the less gentle ones, and more in the category of “just look directly into your suffering and override the desire to flinch away (or to run away screaming)”. I’ve been doing something like that recently, and the first instinctive reaction still tends to be “ugh no I don’t want to be with this”, until the memory of “yes but just bear with it” kicks in.

I think the part you said about things being actively destabilizing and the person realizing their categories are meaningless, etc., is more of an issue for the paths focusing on structure rather than content? I know it’s possible to end up in insight territory through parts work too (I think I may have gotten at least one of my clients to some stage of enlightenment without even trying to), but it seems more rare.

When you talk about actively adversarial patterns, adversarial in what sense? I read you as suggesting something like “unaligned with human values”, but that’s a bit hard for me to interpret in this context since our values also arise from the patterns themselves.

romeostevensit

I think the adversarial thing gets to a claim I have: Values don’t arise from the patterns. One of the things Buddhism deconfuses is where in the stack values come from. By default, small selves (the thing we normally identify with moment to moment) have means-ends confusion. They were spun up as a strategy for pursuing goals, but they don’t cleanly track instrumental vs terminal considerations and they aren’t super aware of other selves or the selfing process. So they think that any disruption in their strategy is a disruption to the possibility of the goal. Since their job is to get you things you want, including staying alive, and since telling you that their survival is your survival is also memetically fit, we should be unsurprised that this feels like an existential threat.

Rather, values are part of the design desiderata that underlie why we have parts instantiating patterns in the first place. “Values” are something like compressions over a large number of (sample complexity reduction?) heuristics that are both learned and in the priors. Another way of viewing them is as pointers at transfer learning.

The patterns themselves are unaligned due to a design constraint. Due to the way human psychology seems to function, we can simultaneously become convinced that something is both necessary and forbidden. E.g. when the only route to mating or career success passes through anti-social patterns. When this happens, patterns get instantiated with non-transparency as a design constraint (see: The Elephant in the Brain). These patterns then get built up into complexes of strategies, which can then fail to notice e.g. the means-ends confusion that means they don’t actually need to be fighting the other parts. So it’s a path-dependent arising of misunderstanding. But in the meantime, the actual moment-to-moment experience is of something with horrible side effects that isn’t endorsed on reflection, like ‘maliciousness’.

Let’s take an example that’s a bit closer to home. On becoming convinced of a particular theory of epistemics, a person might be

  1. unaware that they’re actually committing to a ‘bundle of hypotheses’, as Quine put it.

  2. tacitly perceive things that threaten the foundations of any part of this bundle as an attack on the very notion of truth, knowledge, or (emotionally) beauty, human survival, etc.

  3. mysteriously fail to be able to follow what seems structurally like straightforward reasoning when it is extrapolated as heading directly for any core tensions that hold this worldview up, e.g. the uncomputability of hypothesis space, consequentialist cluelessness, and indexical problems in Bayesian reasoning.

Once you escape from such a cluster it becomes really obvious that your thinking about it was previously distorted, and the lack of this distortion is experienced as palpable relief, as mental and emotional contortion takes energy. From within the cluster, contorting is virtuous, and the shape has some sort of platonically formal beauty, never mind that it breaks human bodies to contort that way.

romeostevensit

So, for example, stuff like people being convinced that they’ve chosen their preferences, when studies on children show an ‘identity formation window’ in which things they happen to succeed at during the window become what they later report as their favorite activities, hobbies, career aspirations, personality, etc. This holds even when the activities in question are demonstrably random, leading some children to strongly prefer them and some to strongly dis-prefer them despite this randomness.

Kaj_Sotala

Right, okay. So when I said that values also arise from the patterns, I was thinking of something like the thing in Core Transformation where the initial layers of what the part is trying to do are the “pattern”, but then the actual value it’s trying to achieve is whatever its Core State is.

And the thought I had about that was that while getting the Core State may radically transform the pattern, that still doesn’t lead to a lack of action. Or, even people credibly claiming very high stages of enlightenment still have pretty distinct personalities and personal preferences, which is different from what you’d expect if patterns were entirely distinct from values and everyone had the same core preferences and values. (Culadasa, when he was still alive, was pretty distinctly different from Daniel Ingram.)

My model has been something like: patterns aren’t the ultimate values in the ways that people might think, but a lot of preferences and personality structures still grow from those original deep values. And they persist even after one sees through deep confusions because, why not, you still need some basis for action to operate in the world, and then one might as well call those values too.

But I guess that you might say that whatever one has after going through CT for that pattern is an aligned pattern, and that the unaligned patterns are the ones with the means-end confusion that seems to put parts in conflict with one another?

The bit about feeling that something is both necessary and forbidden also reminds me of this bit from No More Mr. Nice Guy:

For Nice Guys, trying to become needless and wantless was a primary way of trying to cope with their childhood abandonment experiences. Since it was when they had the most needs that they felt the most abandoned, they believed it was their needs that drove people away.

These helpless little boys concluded that if they could eliminate or hide all of their needs, then no one would abandon them. They also convinced themselves that if they didn’t have needs, it wouldn’t hurt so bad when their needs weren’t met. Not only did they learn early not to expect to get their needs met, but also that their very survival seemed to depend on appearing not to have needs.

This created an unsolvable bind: these helpless little boys could not totally repress their needs and stay alive, and they could not meet their needs on their own. The only logical solution was to try to appear to be needless and wantless while trying to get their needs met in indirect and covert ways. [...]

In addition to using ineffective strategies to get their needs met, Nice Guys are terrible receivers. Since getting their needs met contradicts their childhood paradigms, Nice Guys are extremely uncomfortable when they actually do get what they want. Though most Nice Guys have a difficult time grasping this concept, they are terrified of getting what they really want and will go to extreme measures to make sure they don’t. Nice Guys carry out this unconscious agenda by connecting with needy or unavailable people, operating from an unspoken agenda, being unclear and indirect, pushing people away, and sabotaging. [...]

All of these strategies pretty much ensure that the Nice Guy won’t have to experience the fear, shame, or anxiety that might get triggered if he actually allowed someone to focus on his needs. [...]

All Nice Guys are faced with a dilemma: How can they keep the fact that they have needs hidden, but still create situations in which they have some hope of getting their needs met?

In order to accomplish this seemingly impossible goal, Nice Guys utilize covert contracts. These unconscious, unspoken agreements are the primary way that Nice Guys interact with the world around them. Almost everything a Nice Guy does represents some manifestation of a covert contract.

The Nice Guy’s covert contract is simply this: I will do this——(fill in the blank) for you, so that you will do this——(fill in the blank) for me. We will both act as if we have no awareness of this contract.

Most of us have had the experience of leaning over and whispering in our lover’s ear, “I love you.” We then wait expectantly for our beloved to respond with, “I love you, too.” This is an example of a covert contract in which a person gives to get. Saying “I love you” to hear “I love you, too” in return is the basic way Nice Guys go about trying to get all of their needs met. [...]

In reality, the primary paradigm of the Nice Guy Syndrome is nothing more than a big covert contract with life. [...] One of the most common ways Nice Guys use covert contracts to try to meet their needs is through caretaking. [...]

Reese, a graphic designer in his late twenties, is a good example of the extremes to which Nice Guys caretake in their intimate relationships. Reese, who is gay, lamented in one of his therapy sessions, “Why can’t I find a partner who gives as much back to me as I give to him?” He went on to describe how all of his boyfriends seemed to be takers and that he always did all of the giving.

romeostevensit

A brief note on Core Transformation, since most won’t be familiar with it. CT (not to be confused with Connection Theory) is a self-therapy modality whereby one investigates the motivations of our different goals, drives, values, etc., via a chain of instrumental goals in order to purportedly uncover terminal goals, or at least more terminal goals. The author of the process found that doing this explicitly, in addition to being quite pleasant, generated a lot of insight into why goals were fighting with each other, and that this fighting was not due to value differences but generally due to disagreements about strategies and opacity about strategy side effects for other goals. I wrote a bit about it here.

Anyway, I guess we might say that even if a pattern starts out as arbitrary, if you then build an increasingly coherent and influenced-by-you series of choices on top of that pattern, then it becomes more and more reasonable to identify with it. In general, I would guess people could make a lot of progress by investigating their objections to their current projection of what practice is supposed to look like and getting in touch with what’s deeply good about all these objections, e.g. the assumption that identification is supposed to be held as bad, or as the source of problems in the Buddhist frame.

There’s something important here where, while there can be adversarial patterns, people have too many false positives for this sort of thing, leading to excess internal conflict. That these errors tend to obscure the actual adversarial patterns does not seem to be random happenstance. E.g. patterns that derive energy from the internal conflict and therefore have an incentive to maintain it.

Kaj_Sotala

E.g. patterns that derive energy from the internal conflict and therefore have an incentive to maintain it.

That reminds me of a recent tweet where Qiaochu Yuan suggested that part of why being attracted to someone bad for you might be so sticky is that it “goes viral” inside your psyche. There’s competing attraction and aversion, and then at some point secondary reactions like aversion to the attraction, possible shame about getting into a bad situation again, and so forth. Any of those getting triggered is likely to bring up all the others in a cascade, and then they keep reactivating each other.

Also this bit from Motivational Interviewing, 2nd ed:

An avoidance–avoidance conflict [...] involves having to choose between two evils—two (or more) possibilities, each of which involves significant fear, pain, embarrassment, or other negative consequences. This is being caught “between a rock and a hard place” or “between the devil and the deep blue sea.” The important choice factors are all negative, things to be avoided. In a congested city or on a large university campus, for example, one may have to choose between parking far away from one’s destination and parking closer but risking an expensive parking ticket.

Still more vexing is the approach–avoidance type. This kind of conflict seems to have special potential for keeping people stuck and creating considerable stress. Here the person is both attracted to and repelled by the same object. The term “fatal attraction” has been used to describe this kind of love affair: “I can’t live with it, and I can’t live without it.” In alternating cycles, the person indulges in and then resists the behavior (relationship, person, object). The resulting yo-yo effect is a classic characteristic of the approach–avoidance conflict. Ambivalent cognitions, emotions, and behaviors are a normal part of any approach–avoidance conflict situation. Many wry examples are found in American jazz and country and western song lyrics (e.g., “I’m so miserable without you, it’s almost like you’re here”). A 1930s Fletcher Henderson tune quipped, “My sweet tooth says I want to, but my wisdom tooth says no.”

The grand champion of conflicts, however, is the double approach–avoidance type, wherein a person is torn between two alternatives (lovers, lifestyles, etc.), each of which has both enticing positive and powerful negative aspects. As the person moves closer to option A, the disadvantages of A become more salient and the advantages of B seem brighter. When the person then turns and starts moving toward B, the down sides of B become clearer and A starts looking more attractive.

And there seems to be a really common pattern that happens with compulsions and shame. A person will first feel bad about something, then feel compelled to indulge in some vice (excessive alcohol, food, gaming, porn, whatever) that would drown out those bad feelings. But then they feel shame about giving in to their addictive behavior, and that makes them feel worse, and then the need to avoid that feeling of shame makes the compulsion to engage in the vice even stronger. Rinse and repeat.

romeostevensit

There’s also a temporary relief from the shame in the giving in, as consciousness contracts around the addiction object. Concentration, and the sensory clarity that comes with it, Shinzen Young claims, is one of our terminal values. A large number of behaviors are oriented around various states of flow, and if this is not recognized, the reward will seem intrinsically bound to the particular activity.

Kaj_Sotala

Yeah that sounds right, being able to engross myself in something—even on just a momentary level, such as when a phone collapses my awareness—is just really rewarding.

Were the things I mentioned the kinds of things you had in mind when you mentioned patterns that have an incentive to maintain internal conflict?

romeostevensit

Yes, there’s also the rabbit hole of how this interacts with the whole conflict vs mistake thing, but I don’t know if we want to go into that.

Kaj_Sotala

Yeah I think we probably have enough rabbit holes as it is. :D

There’s something important here where, while there can be adversarial patterns, people have too many false positives for this sort of thing, leading to excess internal conflict. That these errors tend to obscure the actual adversarial patterns does not seem to be random happenstance.

What kinds of false positives did you have in mind here?

romeostevensit

The false positives on adversariality mostly just refer back to the means-ends confusion. We assume adversariality of goals, when it’s just a tradeoff between means. If two parts have obfuscated strategy stacks 7 levels deep, any of those levels can snarl with each other. The spaghetti code that you can read out, once you discover how to, is such a mess that people sometimes don’t think they can get anywhere. In my experience, a dozen hours of carefully going through it does make substantial headway. Nick Cammarata explicitly pointed out a pattern here where people want to cut Gordian Knots and have lots of big insights, but most people would be better served by a more Marie Kondo approach of cleaning up all the minor messes that you already know how to clean up. David Allen also emphasizes this. It becomes obvious if you engage deeply with either of those systems that they’re actually about learning emotional skills, not analytic or executive ones. In the rope analogy, this gives you enough slack to work on the bigger knots.

Kaj_Sotala

Right, that makes sense to me.

I’ve been finding it difficult to keep coming back to this conversation recently and there’s quite a bit of good stuff here already, so let’s ship what we have so far and see if people would have any comments that’d stimulate more discussion.