Sorry, I’d like to understand you, but I don’t yet; what claim do you think I’m making that seems totally misguided?
I’m somehow wanting to clarify the difference between a “bridging heuristic” and solving a bucket error. If a person can hope for “an AI pause” and “not totalitarianism” at the same time (as cousin_it does), they aren’t making a bucket error.
But, they/we might still not know how to try to harness social energy toward “an AI pause” without harnessing social energy toward “let any government that says it’s pro-pause, move toward totalitarianism with AI safety as fig leaf”.
The bridging heuristic I’d want would somehow involve built-in delimiters, so that if a social coalition gathered momentum behind the heuristic, the coalition wouldn’t be exploitable—its members would know what lines, if crossed, meant that the people who had co-opted the name of the coalition were no longer fighting for the coalition’s real values.
Like, if a good free speech organization backs Alice’s legal right to say [really dumb/offensive thing], the organization manages to keep track that its deal is “defend anybody’s legal right to say anything”, rather than “build coalition for [really dumb/offensive thing]”; it doesn’t get confused and switch to supporting [really dumb/offensive thing]. Adequate ethical heuristics around [good thing X, eg AI safety] would let us build social momentum toward [X] without it getting co-opted by [bad things that try to say they’re X].
- Ability to notice and respect self boundaries feels particularly important to me.
That seems right.
I wish I had a clearer notion of what “self” means, here.
I tried asking myself “What [skills / character traits / etc] might reduce risk of psychosis, or might indicate a lack of vulnerability to psychosis, while also being good?”
(The “while also being good” criterion is meant to rule out things such as “almost never changing one’s mind about anything major” that for all I know might be a protective factor, but that I don’t want for myself or for other people I care about.)
I restricted myself to longer-term traits. (That is: I’m imagining “psychosis” as a thing that happens when *both* (a) a person has weak structures in some way; and (b) a person has high short-term stress on those structures, eg from having had a major life change recently or having taken a psychedelic or something. I’m trying to brainstorm traits that would help with (a), controlling for (b).)
It actually hadn’t occurred to me to ask myself this question before, so thank you Adele. (By contrast, I had put effort into reducing (b) in cases where someone is already headed in a mildly psychosis-like direction, eg the first aid stuff I mentioned earlier.)
—
My current brainstorm:
(1) The thing Nathaniel Branden calls “self-esteem,” and gives exercises for developing in The Six Pillars of Self-Esteem. (Note that this is much cooler than what my elementary school teachers seemed to mean by the word.)
(2) The ability to work on long-term projects successfully for a long time. (Whatever that’s made of.)
(3) The ability to maintain long-term friendships and collaborations. (Whatever that’s made of.)
(4) The ability to notice / tune into and respect other people’s boundaries (or organizations’ boundaries, or etc). Where by a “boundary” I mean: (a) stuff the person doesn’t consent to, that common practice or natural law says they’re the authority about (e.g. “I’m not okay with you touching my hand”; “I’m not willing to participate in conversations where I’m interrupted a lot”) OR (b) stuff that’ll disable the person’s usual modes/safeguards/protections/conscious-choosing-powers (?except in unusually wholesome cases of enthusiastic consent).
(5) Anything good that allows people to have a check of some sort on local illusions or local impulses. Eg:
(a) Submission to patterns of ethical conduct or religious practice held by a community or long-standing tradition (okay, sometimes this one seems bad to me, but not always or not purely-bad, and I think this legit confers mental stability sometimes);
(b) Having good long-term friends or family whose views you take seriously;
(c) Regularly practicing and valuing any trade/craft/hobby/skill that is full of feedback loops from the physical world
(d) Having a personal code or a set of personal principles that one doesn’t lightly change (Ray Dalio talks about this)
(e) Somehow regularly contacting a “sense of perspective.” (Eg I think long walks in nature give this to some people)
(6) Tempo stuff: Getting regular sleep, regular exercise, having deep predictable rhythms to one’s life (eg times of day for eating vs for not-eating; times of week for working vs for not-working; times of year for seeing extended family and times for reflecting). Having a long memory, and caring about thoughts and purposes that extend across time.
(7) Embeddedness in a larger world, eg
(a) Having much contact with the weather, eg from working outdoors;
(b) Being needed in a concrete, daily way for something that obviously matters, eg having a dog who needs you to feed and walk them, or having a job where people obviously need you.
I’ll do this; thank you. In general please don’t assume I’ve done all the obvious things (in any domain); it’s easy to miss stuff and cheap to read unneeded advice briefly.
I’ll try here to summarize (my guess at) your views, Adele. Please let me know what I’m getting right and wrong. And also if there are points you care about that I left out.
I think you think:
(1) Psychotic episodes are quite bad for people when they happen.
(2) They happen a lot more (than gen population base rates) around the rationalists.
(2a) They also happen a lot more (than gen population base rates) among “the kinds of people we attract.” You’re not sure whether we’re above the base rate for “the kinds of people who would be likely to end up here.” You also don’t care much about that question.
(3) There are probably things we as a community can tractably do to significantly reduce the number of psychotic episodes, in a way that is good or not-bad for our goals overall.
(4) People such as Brent caused/cause psychotic episodes sometimes, or increase their rate in people with risk factors or something.
(5) You’re not sure whether CFAR workshops were more psychosis-risky than other parts of the rationalist community.
(6) You think CFAR leadership, and leadership of the rationality community broadly, had and has a duty to try to reduce the number of psychotic episodes in the rationalist community at large, not just events happening at / directly related to CFAR workshops.
(6b) You also think CFAR leadership failed to perform this duty.
(7) You think you can see something of the mechanisms whereby psyches sometimes have psychotic episodes, and that this view affords some angles for helping prevent such episodes.
(8) Separately from “7”, you think psychotic episodes are in some way related to poor epistemics (e.g., psychotic people form really false models of a lot of basic things), and you think it should probably be possible to create “rationality techniques” or “cogsec techniques” or something that simultaneously improve most people’s overall epistemics and reduce people’s vulnerability to psychosis.
Thanks; fixed.
CFAR now has an X.com account, https://x.com/CFARonX. If you happen to be up for following us on there, it might help convince X.com that we’re an actual organization and not a spambot, which would be nice for us.
(Weirdly, we “upgraded” to a paid account and it responded to this by freezing our ability to edit our profile photo or handle until verified, which I wish I’d anticipated.)
You’re right. Oops!
I added a footnote above modifying our request to “when it’s easy/convenient.” Eg as mattmacdermott notes below, we can at least use it as a tagline (“Signed, Anna from A Center for …”).
I have now updated the website, so feel free to stop ignoring it. (There are still some changes we’re planning to make sometime in the next month or so, eg adding an FAQ and more staff book picks and the ability to take coaching clients. But the current website should be accurate, if a bit spartan. If you notice something wrong on it, we do want to know.)
I appreciate you taking the time to engage with me here; I imagine this must be a pretty frustrating conversation for you in some ways. Thank you.
No, I mean, I do honestly appreciate you engaging, and my grudgingness is gone now that we aren’t putting the long-winded version under the post about pilot workshops (and I don’t mind if you later put some short comments there). Not frustrating. Thanks.
And please feel free to be as persistent or detailed or whatever as you have any inclination toward.
(To give a bit more context on why I appreciate it: my best guess is that old CFAR workshops did both a lot of good and a significant amount of damage, by which I mostly don’t mean psychosis; I mostly mean smaller kinds of damage to people’s thinking habits or to ways the social fabric could’ve formed. A load-bearing piece of my hope of doing better this time is to try to have everything visible unless we have a good reason not to (a “good reason” like [personal privacy of a person who isn’t in power], which is why I’m not naming the specific people who had manic/psychotic episodes; not like [wanting CFAR not to look bad]), and to try to set up a context where people really do share concerns and thoughts. I’m not wholly sure how to do that, but I’m pretty sure you’re helping here.)
I’ll have more comments tomorrow or sometime.
I look forward to seeing your post. I’d also like to see some of the raw data you’re working from if it seems easy and not-bad to share it with me.
Hmm… I’m not sure that meaning is a particularly salient difference between Mormons and rationalists, to me. You could say both groups strive to bring about a world where Goodness wins and people become masters of planetary-level resources. The community/social-fabric thing seems like the main difference to me (and would apply to WW2 England).
I mean, fair. But meaning in WW2 England is shared, supported, kept in many people’s heads so that if it goes a bit wonky in yours you can easily reload the standard version from everybody else, and it’s been debugged until it recommends fairly sane, stable, socially-accepted courses of action? And meaning around the rationalists is individual and variable.
Added: I just realized that perhaps Adele just wanted this thread to be between Adele/Anna. Oops, if so.
I’d like comments from all interested parties, and I’m pretty sure Adele would too! She started it on my post about the new pilot CFAR workshops, and I asked if she’d move it here, but she mentioned wanting more people to engage, and you (or others) talking seems great for that.
See context in our original thread.
The reason I expect things to be worse if the modification is pushed on a person to any degree is that I figure our brains/minds often know what they’re doing, and have some sort of “healthy” process for changing that doesn’t usually involve a psychotic episode. It seems more likely to me that our brains/minds will get updated in a way that causes trouble if some outside force is pressuring or otherwise messing with them.
One experience my attention has lingered on, re: what’s up with the bay area rationality community and psychosis:
In ~2018, as I mentioned in the original thread, a person had a psychotic episode at or shortly after attending a CFAR thing. I met his mom some weeks later. She was Catholic, and from a more rural or small-town-y area where she and most people she knew had stable worldviews and social fabrics, in a way that seemed to me like the opposite of the bay area.
She… was pleased to hear I was married, asked with trepidation whether she could ask if I was monogamous, was pleased to hear I was, and asked with trepidation whether my husband and I had kids (and was less-heartened to hear we didn’t). I think she was trying to figure out whether it was possible for a person to have a normal, healthy, wholesome life while being part of this community. She visibly had a great deal of reflective distance from her choices of actions—she had the ability “not to believe everything she thought”, as Eliezer would put it, and also not to act out every impulse she had, or to blurt out every thought. I came away believing that that sort of [stable ego and cohesive self and reflective distance from one’s impulses—don’t have a great conceptualization here] was the opposite of being a “crazy person”. And that somehow most people I knew in the bay area were half-way to crazy, from her POV—we weren’t literally walking down the street talking to ourselves and getting flagged by police as crazy, but there was something in common.
Am I making any sense here?
My hypothesis for why the psychosis thing is the case is that it has to do with drastic modification of self-image.
I’m interested in hearing more about the causes of this hypothesis. My own guess is that sudden changes to the self-image cause psychosis more than other sudden psychological change does, but that all rapid psychological change will tend to cause it to some extent. I also share the prediction (or maybe for you it was an observation) that you wrote in our original thread: “It seems to be a lot worse if this modification was pushed on them to any degree.”
The reasons for my own prediction are:
1) My working model of psychosis is “lack of a stable/intact ego”, where my working model of an “ego” is “the thing you can use to predict your own actions so as to make successful multi-step plans, such as ‘I will buy pasta, so that I can make it on Thursday for our guests.’”
2) Self-image seems quite related to this sort of ego.
3) Nonetheless, recreational drugs of all sorts, such as alcohol, seem to sometimes cause psychosis (not just psychedelics), so … I guess I tend to think that any old psychological change sometimes triggers psychosis.
3b) Also, if it’s true that reading philosophy books sometimes triggers psychosis (as I mentioned my friend’s psychiatrist saying, in the original thread), that seems to me probably better modeled by “change in how one parses the world” rather than by “change in self-image”? (not sure)
4) Relatedly, maybe: people say psychosis was at unusually low levels in England in WW2, perhaps because of the shared society-level meaning (“we are at war, we are on a team together, your work matters”). And you say your Mormon ward as a kid didn’t have much psychosis. I tend to think (but haven’t checked, and am not sure) that places with unusually coherent social fabric, and people who have strong ecology around them and have had a chance to build up their self-image slowly and in deep dialog with everything around them, would have relatively low psychosis, and that rapid psychological change of any sort (not only to the self-image) would tend to mess with this.
Epistemic status of all this: hobbyist speculation, nobody bet your mental health on it please.
I’m also interested in why you say CFAR leadership has not responded appropriately. I think we mostly have, though not always.
Thanks. I would love to hear more about your data/experiences, since I used to be quite plugged into the more “mainstream” parts of the bay area rationalist community, and would guess I heard about a majority of sufficiently bad mental health events from 2009-2019 in that community. But I left the bay area when Covid hit, and have been mostly unplugged from detailed/broad-spectrum community gossip since then.
If this is meant to be a characterization of my past actions (or those of any other CFAR team member, for that matter), I disagree with it. I did and do feel a duty of care. When I had particular agendas, eg about AI safety recruiting, that were relevant to my interactions with a particular participant, I generally shared them with that participant. The thing I tried to describe as a mistake, and to change, was about an orientation to “narrative syncing” and general community setup; it was not about the deontology owed to CFAR participants as individuals.