I like this point, particularly the “controlling vs opening” bit. I believe I’ve seen this happen, in a fairly internally-grown way, in people within the wider rationalist milieu. I believe I’ve also seen (mostly via hearsay, so, error bars) a more interpersonal “high stakes, therefore [tolerate bad/crazy things that someone else in the group claims has some chance at helping somehow with AI]” dynamic happen in several different quasi-cults on the outskirts of the rationalists.
Fear is part of where controlling (vs opening) dynamics come from, sometimes, I think. (In principle, one can hold an intellectual stance of “there’s something precious that may be lost here” without the emotion of fear; it’s the emotion that I think inclines people toward the narrowing/controlling dynamic.) I also think there’s something in the notion that we should aspire toward being “Bayesian agents” that lends itself toward controlling dynamics (Joe Carlsmith gets at some of this in his excellent “Otherness and control in the age of AI” sequence, IMO).
I agree Focusing helps some, when done well. (Occasionally it even helps dramatically.) It’s not just a CFAR thing; we got it from Gendlin, and his student Ann Weiser Cornell and her students are excellent at it, are unrelated to the rationalists, and offer sessions and courses that’re excellent IMO. I also think nature walks and/or exercise help some people, as does e.g. having a dog, or doing concrete things that matter for other people even if they’re small, etc.: stuff that helps people regain a grounding in how to care about normal things.
I suspect it would also be good to have a better conceptual handle on the whole thing. (I tried with my Emergencies post, and it’s better than not having tried, but it mostly argued “here’s why it’s counterproductive to be in a controlling/panicky way about AI risk” and did not provide “here’s an actually accessible way to do something else”.)
Nice! I’m excited that the control vs opening thing clicked for you; I’m pretty happy with that frame and haven’t yet figured out how to communicate it well broadly.
It’s not just a CFAR thing; we got it from Gendlin, and his student Ann Weiser Cornell and her students are excellent at it, are unrelated to the rationalists, and offer sessions and courses that’re excellent IMO.
Yup, I’ve gotten a ton of benefit from doing AWC’s Foundations on Facilitating Focusing course, and vast benefits from reading her book many times. It’s CFAR stuff in the sense that CFAR was the direct memetic source for me, though IDC feels similarly flavoured and is a CFAR original.
though IDC feels similarly flavoured and is a CFAR original.
Awkwardly, while IDC is indeed similar-flavored and original to CFAR, I eventually campaigned (successfully) to get it out of our workshops because I believe, based on multiple anecdotes, that IDC tends to produce less health rather than more, especially if used frequently. AWC believes Focusing should only be used for dialog between a part and the whole (the “Self”), and I now believe she is correct there.
Huh, I’m curious about your models of the failure modes here, having found IDC pretty excellent for myself and others, and not having run into issues I’d tracked as downstream of it.
Actually, let’s take a guess first… parts which are not grounded in self-attributes building channels to each other can create messy dynamics, with more tugs-of-war in the background or tactics which complexify the situation?
Plus less practice at having a central self, and less cohesive narrative/more reifying fragmentation as possible extra dynamics?
Your guess above, plus: the person’s “main/egoic part”, who has mastered far-mode reasoning and the rationalist/Bayesian toolkit, and who is out to “listen patiently to the dumb near-mode parts that foolishly want to do things other than save the world,” can in some people, with social “support” from outside them, come to overpower those other bits of the psyche in ways that’re more like tricking and less like “tugs-of-war”, without realizing it’s doing this.