Trying to do a cooperative, substantive reply. Seems like openness and straightforwardness are the best way here.
I found the above to be a mix of surprising and believable. I was at CFAR full-time from Oct 2015 to Oct 2018, and in charge of the mainline workshops specifically for about the last two of those three years.
At least four people
This surprises me. I don’t know what the bar for “worked in some capacity with the CFAR/MIRI team” is. For instance, while at CFAR, I had very little attention on the comings-and-goings at MIRI, a much larger organization, and also CFAR had a habit of using five or ten volunteers at a time for workshops, month in and month out. So this could be intended to convey something like “out of the 500 people closest to both orgs.” If it’s meant to imply “four people who would have worked for more than 20 hours directly with Duncan during his three years at CFAR,” then I am completely at a loss; I can’t think of any such person who, to my knowledge, had a psychotic break.
Psychedelic use was common among the leadership
This also surprises me. I do not recall ever either directly encountering or hearing open discussions of psychedelic use while at CFAR. It was mentioned nonzero times in the abstract, as were any of dozens of other things (CFAR’s colloquia wandered far and wide). But while I can think of a time when a CFAR staff member spoke quietly and reservedly while not at work about an experience with psychedelics, I was not in touch with any such common institutional casualness, or “this is cool and people should do it” vibe, between 10/15 and 10/18. I am not sure if this means it happened at a different time, or happened out of my sight, or what; I’m just reporting that I myself did not pick up on the described vibe at all. In fact, I can think of several times that psychedelic use was mentioned by participants or volunteers at workshops, and was immediately discouraged by staff members along the lines of “look, that’s the sort of thing people might have personal experiences with, but it’s very much not at all in line with what we’re trying to do or convey here.”
Debugging sessions
This … did not surprise me. It is more extreme than I would have described and more extreme than I experienced or believe I participated in/perpetuated, but it is not to the point where I feel a “pshhhh, come on.” I will state for the record that I recall very very few debugging sessions between me and any less-senior staff member in my three years (<5), and absolutely none where I was the one pushing for debugging to happen (as opposed to someone like Eli Tyre (who I believe would not mind being named) asking for help working through something or other).
Relatedly, the organization uses a technique called goal factoring
This one misses the mark entirely, as far as I can see. Goal factoring, at least in the 2015-2018 window, bears no resemblance whatsoever EDIT: little resemblance to things like Connection Theory or Charting. It’s a pretty straightforward process of “think about what you want, think about its individual properties, and do a brainstorming session on how you might get each individual property on its own before returning to the bird’s-eye view and making a new plan.” There’s nothing psych-oriented about it except in the very general sense of “what kinds of good things were you hoping to get, when you applied to med school?”
No one at CFAR was required to use the double-crux conversational technique
This one feels within the realm of the believable. The poster describes a more blatant adversarial atmosphere than I experienced, but I did sometimes have the feeling, myself, that people would double crux when that was useful to their goals and not when it wasn’t, and I can well imagine someone else having a worse experience than I did. I had some frustrating arguments in which it took more than an hour to establish the relevance of e.g. someone having agreed in writing to show up to a thing and then not doing so. However, in my own personal experience, this didn’t seem any worse than what most non-Hufflepuff humans do most of the time; it was more “depressingly failing to be better than normal” than “notably bad.” If someone had asked me to make a list of the top ten things I did not like at CFAR, or thought were toxic, this would not have made the list from my own personal point of view.
There were required sessions of a social/relational practice called circling
This is close to my experience. Notably, there was a moment in CFAR’s history when it felt like the staff had developed a deep and justified rapport, and was able to safely have conversations on extremely tricky and intimate topics. Then a number of new hires were just—dropped in, sans orientation, and there was an explicit expectation that I/we go on being just as vulnerable and trusting as we had been the day before. I boycotted those circles for several months before tolerance-for-boycott ran out and I was told I had to start coming again because it was a part of the job. I disagree with “The whole point of circling is to create a state of emotional vulnerability and openness in the person who is being circled,” but I don’t disagree that this is often the effect, and I don’t disagree with “This often required rank-and-file members to be emotionally vulnerable to the leadership who perhaps didn’t actually have their best interests at heart.”
The overall effect of all this debugging and circling was that it was hard to maintain the privacy and integrity of your mind if you were a rank-and-file employee at CFAR.
This also has the ring of truth, though I’m actually somewhat confused by the rank-and-file comment. Without trying to pin down or out this person, there were various periods at CFAR in which the organization was more (or less) flat and egalitarian, so there were many times (including much of my own time there) when it wouldn’t make sense to say that “rank-and-file employees” was a category that existed. However, if I think about the times when egalitarianism was at its lowest, and people had the widest diversity of power and responsibility, those times did roughly correspond with high degrees of circling and one-on-one potentially head-melty conversations.
Pressure to debug at work
This bullet did not resonate with me at all, but I want to be clear that that’s not me saying “no way.” Just that I did not experience this, and do not recall hearing this complaint, and do not recall participating in the kind of close debugging that I would expect to create this feeling. I had my own complaints about work/life boundaries, but for me personally they didn’t lie in “I can’t get away from the circles and the debugging.” (I reiterate that there wasn’t much debugging at all in my own experience, and all of that solicited by people wanting specific, limited help with specific, limited problems (as opposed to people virtuously producing desire-to-be-debugged in response to perceived incentives to do so, as claimed in some of the Leverage stuff).)
The longer you stayed with the organization, the more it felt like your family and friends on the outside could not understand the problems facing the world, because they lacked access to the reasoning tools and intellectual leaders you had access to. This led to a deep sense of alienation from the rest of society. Team members ended up spending most of their time around other members and looking down on outsiders as “normies”.
This zero percent matches my experience, enough that I consider this the strongest piece of evidence that this person and I did not overlap, or had significant disoverlap. The other alternative being that I just swam in a different subcultural stream. But my relationships with friends and family utterly disconnected from the Bay Area and the EA movement and the rationalist community only broadened and strengthened during my time at CFAR.
There was a rarity narrative around being part of the only organization trying to “actually figure things out”, ignoring other organizations in the ecosystem working on AI safety and rationality and other communities with epistemic merit. CFAR/MIRI perpetuated the sense that there was nowhere worthwhile to go if you left the organization.
Comments like this make me go “ick” at the conflation between CFAR and MIRI, which are extremely different institutions with extremely different internal cultures (I have worked at each). But if I limit this comment to just my experience at CFAR—yes, this existed, and bothered me, and I can recall several instances of frustratedly trying to push back on exactly this sort of mentality. e.g. I had a disagreement with a staff member who claimed that the Bay Area rationalist community had some surprising-to-me percentage of the world’s agentic power (it might have been 1%, it might have been 10%; either way, it struck me as way too high). That being said, that staff member and I had a cordial and relatively productive disagreement. It’s possible that I was placed highly enough in the hierarchy that I wasn’t subject to the kind of pressure that this person’s account seems to imply.
There was a rarity narrative around the sharpness of Anna’s critical thinking skills, which made it so that if Anna knew everything you knew about a concern and disagreed with you, there was a lot of social pressure to defer to her judgment.
I did not have this experience. I did, however, have the experience of something like “if Anna thinks your new idea for a class (or whatever) is interesting, it will somehow flourish and there will be lots of discussion, and if Anna thinks it’s boring or trivial, then you’ll be perfectly able to carry on tinkering with it by yourself, or if you can convince anyone else that it’s interesting.” I felt personally grumpy about the different amount of water I felt different ideas got; some I thought were unpromising got much more excitement than some I thought were really important.
However, in my own personal experience/my personal story, this is neither a) Anna’s fault, nor b) anything other than business as usual? Like, I did not experience, at all, any attempt from Anna to cultivate some kind of mystique, or to try to swing other people around behind her. Quite the contrary—I multiple times saw Anna try pretty damn hard to get people to unanchor from her own impressions or reactions, and I certainly don’t blame her for being honest about what she found promising, even where I disagreed. My sense was that the stuff I was grumpy about was just the result of individuals freely deferring to Anna’s judgment, or just the way that vibes and enthusiasm spread in monkey social groups. I never felt like, for instance, Anna (or anyone on Anna’s behalf) was trying to suffocate one of my ideas. It just felt like my ideas had a steeper hill in front of them, due to no individual’s conscious choices. Moloch, not malice.
This made it so that Anna’s update towards short timelines caused a herd of employees and volunteers to defer to her judgment almost overnight...however, Anna also put substantial pressure on members of the team to act as if shorter timelines were the case.
Did not experience. Do not rule out, but did not experience. Can neither confirm nor deny.
The later iterations of the team idolized the founders...no new techniques have been developed in quite a few years
Yes. This bothered me no end, and I both sparked and joined several attempts to get new curriculum development initiatives off the ground. None of these were particularly successful, and I consider it really really bad that no substantially new CFAR content was published in my last year (or, to the best of my knowledge, in the three years since). However, to be clear, I also did not experience any institutional resistance to the idea of new development. It just simply wasn’t prioritized on a mission level and therefore didn’t cohere.
There was rampant use of narrative warfare (called “narrativemancy” within the organization) by leadership to cast aspersions and blame on employees and each other. There was frequent non-ironic use of magical and narrative schemas which involved comparing situations to fairy-tales or myths and then drawing conclusions about those situations with high confidence. The narrativemancer would operate by casting various members of the group into roles and then using the narrative arc of the story to make predictions about how the relationship dynamics of the people involved would play out. There were usually obvious controlling motives behind the narrative framings being employed, but the framings were hard to escape for most employees.
This reads as outright false to me, like the kind of story you’d read about in a clickbait tabloid that overheard enough words to fabricate something but didn’t actually speak to anyone on the ground.
The closest I can think of to what might have sparked the above description is Val’s theorizing on narrativemancy and the social web? But this mainly played out in scattered colloquium talks that left me, at least, mostly nonplussed. To the extent that there was occasional push toward non-ironic use of magical schemas, I explicitly and vigorously pushed back (I had deep misgivings about even tiny, nascent signs of woo within the org). But I saw nothing that resembles “people acting as narrativemancers” or telling stories based on clichés or genre tropes. I definitely never told such stories myself, and I never heard one told about me or around me.
That being said, the same caveats apply: this could have been at a different time, or in a different subculture within the org, or something I just missed. I am not saying “this anecdote is impossible.” I’m just saying ????
I will say this, though: to the extent that the above description is accurate, that’s deeply fucked. Like, I want to agree wholeheartedly with the poster’s distaste for the described situation, separate from my ability to evaluate whether it took place. That’s exactly the sort of thing you go to a “center for applied rationality” to escape, in my book.
Generally there was a lack of clarity around which set of rules were at play at CFAR events and gatherings: those of a private gathering or those of the workplace. It seemed that the decision of which rules were at play were made ad hoc depending on the person’s aesthetic / presentation, their social standing, and the offense being considered. In the absence of clear standards people ultimately fell back on blame-games and coalitional negotiation to resolve issues instead of using more reasonable approaches.
I do not recognize the vibe of this anecdote, either (can’t think of “offenses” committed or people sitting in judgment; sometimes people didn’t show up on time for meetings? Or there would be personal disagreements between e.g. romantic exes?). However, I will note that CFAR absolutely blurred the line between formal workshop settings, after-workshop parties, and various tiers of alumni events that became more or less intimate depending on who was invited. While I didn’t witness any “I can’t tell what rules apply; am I at work or not?” confusion, it does seem to me that CFAR in particular would be 10x more likely to create that confusion in someone than your standard startup. So: credible?
At such social gatherings you felt uncertain at times if you were enjoying yourself at a party, advocating for yourself in an interview, or defending yourself on trial for a crime. This confusing mixture of possible social expectations disoriented attendees and caught them off-guard giving team members deeper insight into their psyches. No party was just a party.
Again, confusing and not at all in synch with my personal experience. But again: plausible/credible, especially if you add in the fact that I had a relatively secure role and am relatively socially oblivious. I do not find it hard to imagine being a more junior staff member and feeling the anxiety and insecurity described.
I don’t know. I can’t tell how helpful any of my commentary here is. I will state that while CFAR and I have both tried to be relatively polite and hands-off with each other since parting ways, no one ever tried to get me to sign an NDA, or implied that I couldn’t or shouldn’t speak freely about my experiences or opinions. I’ve been operating under the more standard-in-our-society just-don’t-badmouth-your-former-workplace-and-they-won’t-badmouth-you peace treaty, which seems good for all sorts of reasons and didn’t seem unusually strong for CFAR in particular.
Which is to say: I believe myself to be free to speak freely, and I believe myself to be being candid here. I am certainly holding many thoughts and opinions in reserve, but I’m doing so by personal choice and golden-rule policy, and not because of a sense that Bad Things Would (immediately, directly) Happen If I Didn’t.
Like, I want to agree wholeheartedly with the poster’s distaste for the described situation, separate from my ability to evaluate whether it took place.
As a general dynamic, no idea if it was happening here but just to have as a hypothesis, sometimes people selectively follow rules of behavior around people that they expect will seriously disapprove of the behavior. This can be well-intentioned, e.g. simply coming from not wanting to harm people by doing things around them that they don’t like, but could have the unfortunate effect of producing selected reporting: you don’t complain about something if you’re fine with it or if you don’t see it, so the only reports we get are from people who changed their mind (or have some reason to complain about something they don’t actually think is bad). (Also flagging that this is a sort of paranoid hypothesis; IDK how the world is on this dimension, but the Litany of Gendlin seems appropriate. Also it’s by nature harder to test, and therefore prone to the problems that untestable hypotheses have.)
This literally happened with Brent; my current model is that I was (EDIT: quite possibly unconsciously/reflexively/non-deliberately) cultivated as a shield by Brent, in that he much-more-consistently-than-one-would-expect-by-random-chance happened to never grossly misbehave in my sight, and other people, assuming I knew lots of things I didn’t, never just told me about gross misbehaviors that they had witnessed firsthand.
there was a lot of social pressure to defer to her judgment.
Moloch, not malice.
The two stories here fit consistently in a world where Duncan feels less social pressure than others including Phoenix, so that Duncan observes people seeming to act freely but Molochianly, and they experience network-effect social pressure (which looks Molochian, but is maybe best thought of as a separate sort of thing).
I worked for CFAR from 2016 to 2020, and am still somewhat involved.
This description does not reflect my personal experience at all.
And speaking from my view of the organization more generally (not just my direct personal experience): Several bullet points seem flatly false to me. Many of the bullet points have some grain of truth to them, in the sense that they refer to or touch on real things that happened at the org, but then depart wildly from my understanding of events, or (according to me) mischaracterize / distort things severely.
I could go through and respond in more detail, point by point, if that is really necessary, but I would prefer not to do that, since it seems like a lot of exhausting work.
As a sort of free sample / downpayment:
At least four people who did not listen to Michael’s pitch about societal corruption and worked in some capacity with the CFAR/MIRI team had psychotic episodes.
I don’t know who this is referring to. To my knowledge 0 people who are or have been staff at CFAR had a psychotic episode either during or after working at CFAR.
Psychedelic use was common among the leadership of CFAR and spread through imitation, if not actual institutional encouragement, to the rank-and-file. This makes it highly distressing that Michael is being singled out for his drug advocacy by people defending CFAR.
First of all, I think the use of “rank-and-file” throughout this comment is misleading to the point of being dishonest. CFAR has always been a small organization of no more than 10 or 11 people, often flexibly doing multiple roles. The explicit organizational structure involved people having different “hierarchical” relationships depending on context.
In general, different people lead different projects, and the rest of the staff would take “subordinate” roles in those projects. That is, if Elizabeth is leading a workshop, she would delegate specific responsibilities to me as one of her workshop staff. But in a different context, where I’m leading a project, I might delegate to her, and I might have the final say. (At one point this was an official, structural policy, with a hierarchy of reporting mapped out on a spreadsheet, but for most of the time I’ve been there it has been much more organic than that.)
But these hierarchical arrangements are both transient and do not at all dominate the experience of working for CFAR. Mostly we are and have been a group of pretty independent contributors, with different views about x-risk and rationality and what-CFAR-is-about, who collaborate on specific workshops and (in a somewhat more diffuse way) in maintaining the organization. There is not anything like the hierarchy you typically see in larger organizations, which makes the frequent use of the term “rank and file” seem out of place and disingenuous, to me.
Certainly, Anna was always in a leadership role, in the sense that the staff respected her greatly, and were often willing to defer to her, and at most times there was an Executive Director (ED) in addition to Anna.
That said, I don’t think that Anna, or either of the EDs, ever confided to me that they had taken psychedelics, even in private. I certainly didn’t feel pressured to do psychedelics, and I don’t see how that practice could have spread by imitation, given that it was never discussed, much less modeled. And there was not anything like “institutional encouragement”.
The only conversations I remember having about psychedelic drugs are the conversations in which we were told that it was one of the topics that we were not to discuss with workshop participants, and a conversation in which Anna strongly stated that psychedelics were destabilizing and implied that they were...generally bad, or at least that being reckless with them was really bad.
Personally, I have never taken any psychoactive drugs aside from nicotine (and some experimentation with caffeine and modafinil, once). This stance was generally respected by CFAR staff. Occasionally, some people (not Anna or either ED) expressed curiosity about or gently ribbed me about my hard-line stance of not drinking alcohol, but in a way that was friendly and respectful of my boundaries. My impression is that Anna more-or-less approves of my stance on drugs, without endorsing it as the only or obvious stance.
Debugging sessions with Anna and with other members of the leadership was nigh unavoidable and asymmetric, meaning that while the leadership could avoid getting debugged it was almost impossible to do so as a rank-and-file member. Sometimes Anna described her process as “implanting an engine of desperation” within the people she was debugging deeply. This obviously had lots of ill psychological effects on the people involved, but some of them did seem to find a deeper kind of motivation.
This is false, or at minimum is overly general, in that it does not resemble my experience at all.
My experience:
I could and can easily avoid debugging sessions with Anna. Every interaction that I’ve had with her has been consensual, and she has, to my memory, always respected my boundaries, when I had had enough, or was too tired, or the topic was too sensitive, or whatever. In general, if I say that I don’t want to talk about something, people at CFAR respect that. They might offer care, or help, in case I decided I wanted it, but then they would leave me alone. (Most of the debugging, etc., conversations that I had at CFAR, I explicitly sought out.)
This also didn’t happen that frequently. While I’ve had lots of conversations with Anna, I estimate I’ve had deep “soulful” conversations, or conversations in which she was explicitly teaching me a mental technique...around once every 4 months, on average?
Also, though it has happened somewhat more rarely, I have participated in debugging style conversations with Anna where I was in the “debugger” role.
(By the way, in CFAR’s context, the “debugger” role is explicitly a role of assistance / midwifery, of helping a person get traction and understanding on some problem, rather than an active role of doing something to or intervening on the person being debugged.
Though I admit that this can still be a role with a lot of power and influence, especially in cases where there is an existing power or status differential. I do think that early in my experience with CFAR, I was too willing to defer to Anna about stuff in general, and might make big changes in my personal direction at her suggestion, despite not really having an inside view of why I should prefer that direction. She and I would both agree, today, that this is bad, though I don’t consider myself to have been majorly harmed by it. I also think it is not that unusual. Young people are often quite influenced by role models that they are impressed by, often without clear-to-them reasons.)
I have never heard the phrase “engine of desperation” before today, though it is true that there was a period in which Anna was interested in a kind of “quiet desperation” that she thought was an effective place to think and act from.
I am aware of some cases of Anna debugging with CFAR staff that seem somewhat more fraught than my own situation, but from what I know of those, they are badly characterized by the above bullet point.
I could go on, and I will if that’s helpful. I think my reaction to these first few bullet points is a broadly representative sample.
Thank you for adding your detailed take/observations.
My own take on some of the details of CFAR that’re discussed in your comment:
Debugging sessions with Anna and with other members of the leadership was nigh unavoidable and asymmetric, meaning that while the leadership could avoid getting debugged it was almost impossible to do so as a rank-and-file member. Sometimes Anna described her process as “implanting an engine of desperation” within the people she was debugging deeply. This obviously had lots of ill psychological effects on the people involved, but some of them did seem to find a deeper kind of motivation.
I think there were serious problems here, though our estimates of the frequencies might differ.
To describe the overall situation in detail:
I often got debugging help from other members of CFAR, but, as noted in the quote, it was voluntary. I picked when and about what and did not feel pressure to do so.
I can think of at least three people at CFAR who had a lot of debugging sort of forced on them (visibly expected as part of their job set-up or of check-in meetings or similar; they didn’t make clear complaints but that is still “sort of forced”), in ways that were large and that seem to me clearly not okay in hindsight. I think lots of other people mostly did not experience this. There are a fair number of people about whom I am not sure or would make an in-between guess. To be clear, I think this was bad (predictably harmful, in ways I didn’t quite get at the time but that e.g. standard ethical guidelines in therapy have long known about), and I regret it and intend to avoid “people doing extensive debugging of those they have direct power over” contexts going forward.
I believe this sort of problem was more present in the early years, and less true as CFAR became older, better structured, somewhat “more professional”, and less centered around me. In particular, I think Pete’s becoming ED helped quite a bit. I also think the current regime (“holacracy”) has basically none of this, and is structured so as to predictably have basically none of this—predictably, since there’s not much in the way of power imbalances now.
It’s plausible I’m wrong about how much of this happened, and how bad it was, in different eras. In particular, it is easy for those in power (e.g., me) to underestimate aspects of how bad it is not to have power; and I did not do much to try to work around the natural blindspot. If anyone wants to undertake a survey of CFAR’s past and present staff on this point (ideally someone folks know and can accurately trust to maintain their anonymity while aggregating their data, say, and then posting the results to LW), I’d be glad to get email addresses for CFAR’s past and present staff for the purpose.
I’m sure I did not describe my process as “implanting an engine of desperation”; I don’t remember that and it doesn’t seem like a way I would choose to describe what I was doing. “Implanting” especially doesn’t. As Eli notes (this hadn’t occurred to me, but might be what you’re thinking of?), I did talk some about trying to get in touch with one’s “quiet desperation”, and referenced Pink Floyd’s song “Time” and “the mass of men lead lives of quiet desperation” and developed concepts around that; but this was about accessing a thing that was already there, not “implanting” a thing. I also led many people in “internal double cruxes around existential risk”, which often caused fairly big reactions as people viscerally noticed “we might all die.”
Relatedly, the organization uses a technique called goal factoring during debugging which was in large part inspired by Geoff Anders’ Connection Theory and was actually taught by Geoff at CFAR workshops at some point. This means that CFAR debugging in many ways resembles Leverage’s debugging and the similarity in naming isn’t just a coincidence of terms.
I disagree with this point overall.
Goal-Factoring was first called “use fungibility”, a technique I taught within a class called “microeconomics 1” at the CFAR 2012 minicamps prior to Geoff doing any teaching. It was also discussed at times in some form at the old SingInst visiting fellows program, IIRC.
Geoff developed it, and taught it at many CFAR workshops in early years (2013-2014, I think). The choice that it was Goal-Factoring that Geoff taught (was asked to teach? wanted to teach? I don’t actually remember; probably both?) had, I think, partly to do with its resemblance to the beginning/repeated basic move in Connection Theory.
No one at CFAR was required to use the double-crux conversational technique for reaching agreement, but if a rank-and-file member refused to they were treated as if they were being intellectually dishonest, while if a leader refused to they were just exercising their right to avoid double-cruxing. While I believe the technique is epistemically beneficial, the uneven demands on when it is used biases outcomes of conversations.
My guess is that there were asymmetries like this, and that they were important, and that they were not worse than most organizations (though that’s really not the right benchmark). Insofar as you have experience at other organizations (e.g. mainstream tech companies or whatnot), or have friends with such experience who you can ask questions of, I am curious how you think they compare.
On my own list of “things I would do really differently if I was back in 2012 starting CFAR again”, the top-ranked item is probably:
Share information widely among staff, rather than (mostly unconsciously/not-that-endorsedly) using lack-of-information-sharing to try to control people and outcomes.
Do consider myself to have some duty to explain decisions and reply to questions. Not “before acting”, because the show must go on and attempts to reach consensus would be endless. And not “with others as an authority that can prevent me from acting if they don’t agree.” But yes with a sincere attempt to communicate my actual beliefs and causes of actions, and to hear others’ replies, insofar as time permits.
I don’t think I did worse than typical organizations in the wider world, on the above points.
I’m honestly uncertain how much this is/isn’t related to the quoted complaint.
There were required sessions of a social/relational practice called circling (which kind of has a cult of its own). It should be noted that circling as a practice is meant to be egalitarian and symmetric, but circling within the context of CFAR had a weird power dynamic because subordinates would circle with the organizational leaders. The whole point of circling is to create a state of emotional vulnerability and openness in the person who is being circled. This often required rank-and-file members to be emotionally vulnerable to the leadership who perhaps didn’t actually have their best interests at heart.
Duncan’s reply here is probably more accurate to the actual situation at CFAR than mine would be. (I wrote much of the previous paragraphs before seeing his, but endorsing Duncan’s on this here seems best.) If Pete wants to weigh in I would also take his perspective quite seriously here. I don’t quite remember some of the details.
As Duncan noted, “creating a state of emotional vulnerability and openness” is really not supposed to be the point of circling, but it is a thing that happens pretty often and that a person might not know how to avoid.
The point of circling IMO is to break all the fourth walls that conversations often skirt around, let the subtext or manner in which the conversation is being done be made explicit text, and let it all thereby be looked at together.
A different thing that I in hindsight think was an error (that I already had on my explicit list of “things to do differently going forward”, and had mentioned in this light to a few people) was using circling in the way we did at AIRCS workshops, where some folks were there to try to get jobs. My current view, as mentioned a bit above, is that something pretty powerfully bad sometimes happens when a person accesses bits of their insides (in the way that e.g. therapy or some self-help techniques lead people to) while also believing they need to please an external party who is looking at them and has power over them.
(My guess is that well-facilitated circling is fine at AIRCS-like programs that are less directly recruiting-oriented. Also that circling at AIRCS had huge upsides. This is a can of worms I don’t plan to go into right now, in the middle of this comment reply, but flagging it to make my above paragraph not overgeneralized-from.)
The overall effect of all this debugging and circling was that it was hard to maintain the privacy and integrity of your mind if you were a rank-and-file employee at CFAR.
I believe this was your experience, and am sorry. My non-confident guess is that some others experienced this and most didn’t, and that the impact on folks’ mental privacy was considerably more invasive than a standard workplace would’ve been, and that the impact on folks’ integrity was probably less bad than my guess at many mainstream workplaces’ impact but still a lot worse than the CFAR we ought to aim for.
Personally I am not much trying to maintain the privacy of my own mind at this point, but I am certainly trying to maintain its integrity, and I think being debugged by people with power over me would not be good for that.
The longer you stayed with the organization, the more it felt like your family and friends on the outside could not understand the problems facing the world, because they lacked access to the reasoning tools and intellectual leaders you had access to. This led to a deep sense of alienation from the rest of society. Team members ended up spending most of their time around other members and looking down on outsiders as “normies”.
This wasn’t my experience at all, personally. I did have some feeling of distance when I first started caring about AI risk in ~2008, but it didn’t get worse across CFAR. I also stayed in a lot of contact with folks outside the CFAR / EA / rationalist / AI risk spheres through almost all of it. I don’t think I looked down on outsiders.
There was a rarity narrative around being part of the only organization trying to “actually figure things out”, ignoring other organizations in the ecosystem working on AI safety and rationality and other communities with epistemic merit. CFAR/MIRI perpetuated the sense that there was nowhere worthwhile to go if you left the organization.
I thought CFAR and MIRI were part of a rare and important thing, but I did not think CFAR (nor CFAR + MIRI) was the only thing to matter. I do think there’s some truth in the “rarity narrative” claim, at CFAR, mostly via me and to a much smaller extent some others at CFAR having some of this view of MIRI.
There was a rarity narrative around the sharpness of Anna’s critical thinking skills, which made it so that if Anna knew everything you knew about a concern and disagreed with you, there was a lot of social pressure to defer to her judgment.
I agree that this happened and that it was a problem. I didn’t consciously intend to set this up, but my guess is that I did a bunch of things to cause it anyhow. In particular, there’s a certain way I used to sort of take the ground out from under people when we talked, that I think contributed to this. (I used to often do something like: stay cagey about my own opinions; listen carefully to how my interlocutor was modeling the world; show bits of evidence that refuted some of their assumptions; listen to their new model; repeat; … without showing my work. And then they would defer to me, instead of having stubborn opinions I didn’t know how to shift, which on some level was what I wanted.)
People at current-CFAR respect my views still, but it actually feels way healthier to me now. Partly because I’m letting my own views and their causes be more visible, which I think makes it easier to respond to. And because I somehow have less of a feeling of needing to control what other people think or do via changing their views.
(I haven’t checked the above much against others’ perceptions, so would be curious for anyone from current or past CFAR with a take.)
There was rampant use of narrative warfare (called “narrativemancy” within the organization) by leadership to cast aspersions and blame on employees and each other. There was frequent non-ironic use of magical and narrative schemas which involved comparing situations to fairy-tales or myths and then drawing conclusions about those situations with high confidence. The narrativemancer would operate by casting various members of the group into roles and then using the narrative arc of the story to make predictions about how the relationship dynamics of the people involved would play out. There were usually obvious controlling motives behind the narrative framings being employed, but the framings were hard to escape for most employees.
I believe this was your experience, mostly because I’m pretty sure I know who you are (sorry; I didn’t mean to know and won’t make it public) and I can think of at least one over-the-top (but sincere) conversation you could reasonably describe at least sort of this way (except for the “with high confidence”, I guess, and the “frequent”; and some other bits), plus some repeated conflicts.
I don’t think this was a common experience, or that it happened much at all (or at all at all?) in contexts not involving you, but it’s possible I’m being an idiot here somehow in which case someone should speak up.
Which I guess is to say that the above bullet point seems to me, from my experiences/observations, to be mostly or almost-entirely false, but that I think you’re describing your experiences and guesses about the place accurately and that I appreciate you speaking up.
[all the other bullet points]
I agree with parts and disagree with parts, but they seemed mostly less interesting than the above.
—
Anyhow, thanks for writing, and I’m sorry you had bad experiences at CFAR, especially about the fairly substantial parts of the above bad parts that were my fault.
I expect my reply will accidentally make some true points you’re making harder to see (as well as hopefully adding light to some other parts), and I hope you’ll push back in those places.
Related to my reply to PhoenixFriend (in the parent comment), but hopping meta from it:
I have a question for whoever out there thinks they know how the etiquette of this kind of conversation should go. I had a first draft of my reply to PhoenixFriend, where I … basically tried to err on the side of being welcoming, looking for and affirming the elements of truth I could hear in what PhoenixFriend had written, and sort of emphasizing those elements more than my also-real disagreements. I ran it by a CFAR colleague at my colleague’s request, who said something like “look, I think your reply is pretty misleading; you should be louder and clearer about the ways your best guess about what happened differed from what’s described in PhoenixFriend’s comment. Especially since I and others at CFAR have our names on the organization too, so if you phrase things in ways that’ll cause strangers who’re skim-reading to guess that things at CFAR were worse than they were, you’ll inaccurately and unjustly mess with other peoples’ reputations too.” (Paraphrased.)
So then I went back and made my comments more disagreeable and full of details about where my and PhoenixFriend’s models differ. (Though probably still less than the amount that would’ve fully addressed my colleague’s complaints.)
This… seems better in that it addresses my colleague’s pretty reasonable desire, but worse in that it is not welcoming to someone who is trying to share info and is probably finding that hard. I am curious if anyone has good thoughts on how this sort of etiquette should go, if we want to have an illuminating, get-it-all-out-there, non-misleading conversation.
Part of why I’m worried is that it seems to me pretty easy for people who basically think the existing organizations are good, and also that mainstream workplaces are non-damaging and so on, to upvote/downvote each new datum based on those priors plus a (sane and sensible) desire to avoid hurting others’ feelings and reputations without due cause, etc., in ways that despite their reasonability may make it hard for real and needed conversations that are contrary to our current patterns of seeing to get started.
For example, I think PhoenixFriend indeed saw some real things at CFAR that many of those downvoting their comment did not see and mistakenly wouldn’t expect to see, but that also many of the details of PhoenixFriend’s comment are off, partly maybe because they were mis-generalizing from their experiences and partly because it’s hard to name things exactly (especially to people who have a bit of an incentive to mishear.)
(Also, to try briefly and poorly to spell out why I’m rooting for a “get it all out on the table” conversation, and not just a more limited “hear and acknowledge the mostly blatant/known harms, correct those where possible, and leave the rest of our reputation intact” conversation: basically, I think there’s a bunch of built-up “technical debt”, in the form of confusion and mistrust and trying-not-to-talk-about-particular-things-because-others-will-form-“unreasonable”-conclusions-if-we-do and who-knows-why-we-do-that-but-we-do-so-there’s-probably-a-reason, that I’m hoping gets cleared out by the long and IMO relatively high-quality and contentful conversation that’s been happening so far. I want more of that if we can get it. I want culture and groups to be able to build around here without building on top of technical debt. I also want information about how organizations do/don’t work well, and, in terms of means of acquiring this information, I much prefer bad-looking conversations on LW to wasting another five years doing it wrong.)
Personally I am not much trying to maintain the privacy of my own mind at this point,
This sounds like an extreme and surprising statement. I wrote out some clarifying questions like “what do you mean by privacy here”, but maybe it’d be better to just say:
I think it strikes me funny because it sounds sort of like a PR statement. And it sounds like a statement that could set up a sort of “iterations of the Matrix”-like effect. Where, you say “ok now I want to clear out all the miasma, for real”, and then you and your collaborators do a pretty good job at that; but also, something’s been lost or never gained, namely the logical common knowledge that there’s probably-ongoing, probably difficult to see dynamics that give rise to the miasma of {ungrounded shared narrative, information cascades, collective blindspots, deferrals, circular deferrals, misplaced/miscalibrated trust, etc. ??}. In other words, since these things happened in a context where you and your collaborators were already using reflection, introspection, reasoning, communication, etc., we learn that the ongoing accumulation of miasma is a more permanent state of affairs, and this should be common knowledge. Common knowledge would for example help with people being able to bring up information about these dynamics, and expect their information to be put to good use.
(I notice an analogy between iterations of the Matrix and economic boom-bust cycles.)
“get it all out on the table” conversation
“technical debt” [...] I’m hoping gets cleared out
These statements also seem to imply a framing that potentially has the (presumably unintentional) effect of subtly undermining the common knowledge of ongoing miasma-or-whatever. Like, it sort of directs attention to the content but not the generator, or something; like, one could go through all the “stuff” and then one would be done.
This sounds like an extreme and surprising statement.
Well, maybe I phrased it poorly; I don’t think what I’m doing is extreme; “much” is doing a bunch of work in my “I am not much trying to...” sentence.
I mean, there’s plenty I don’t want to share, like a normal person. I have confidential info of other people’s that I’m committed to not sharing, and plenty of my own stuff that I am private about for whatever reason. But in terms of rough structural properties of my mind, or most of my beliefs, I’m not much trying for privacy. Like when I imagine being in a context where a bunch of circling is happening or something (circling allows silence/ignoring questions/etc.; still, people sometimes complain that facial expressions leak through and they don’t know how to avoid it), I’m not personally like “I need my privacy though.” And I’ve updated some toward sharing more compared to what I used to do.
Ok, thanks for clarifying. (To reiterate my later point, since it sounds like you’re considering the “narrative pyramid schemes” hypothesis: I think there is not common knowledge that narrative pyramid schemes happen, and that common knowledge might help people continuously and across contexts share more information, especially information that is pulling against the pyramid schemes, by giving them more of a true expectation that they’ll be heard by a something-maximizing person rather than a narrative-executer).
I have concrete thoughts about the specific etiquette of such conversations (they’re not off the cuff; I’ve been thinking more-or-less continuously about this sort of thing for about eight years now).
However, I’m going to hold off for a bit because:
a) Like Anna, I was a part of the dynamics surrounding PhoenixFriend’s experience, and so I don’t want to seize the reins
b) I’ve also had a hard time coordinating with Anna on conversational norms and practices, both while at CFAR and recently
… so I sort of want to not-pretend-I-don’t-have-models-and-opinions-here (I do) but also do something like “wait several days and let other people propose things first” or “wait until directly asked, having made it clear that I have thoughts if people want them” or something.
Goal-Factoring was first called “use fungibility”, a technique I taught within a class called “microeconomics 1” at the CFAR 2012 minicamps prior to Geoff doing any teaching.
As a participant of Rationality Minicamp in 2012, I confirm this. Actually, found the old textbook, look here!
Okay, so, that old textbook does not look like a picture of goal-factoring, at least not on that page. But I typed “goal-factoring” into my google drive and turned up these old notes that used the word while designing classes for the 2012 minicamps. A rabbithole, but one I enjoyed, so maybe others will.
I worked for CFAR full-time from 2014 until mid-to-late 2016 and have continued working as a part-time employee or frequent contractor since. I’m sorry this was your experience. That said, it really does not mesh that much with what I’ve experienced and some of it is almost the opposite of the impressions that I got. Some brief examples:
My experience was that CFAR if anything should have used its techniques internally much more. Double crux, for instance, felt like it should have been used internally far more than it actually was—one thing that vexed me about CFAR was a sense that there were persistent unresolved major strategic disagreements between staff members that the organization did not seem to prioritize resolving, where I think double crux would have helped.
(I’m not talking about personal disagreements but rather things like “should X set of classes be in the workshop or not?”)
Similarly, goal factoring didn’t see much internal use (I again think it should have been used more!) and Leverage-style “charting” strikes me as really a very different thing from the way CFAR used this sort of stuff.
There was generally little internal “debugging” at all, which contrary to the previous two cases I think is mostly correct—the environment of having your colleagues “debug” you seems pretty weird and questionable. I do think there was at least some of this, but I don’t think it was pervasive or mandatory in the organization and I mostly avoided it.
Far from spending all my time with team members outside of work, I think I spent most of my leisure and social time with people from other groups, many outside the rationalist community. To some degree I (and I think some others) would have liked for the staff to be tighter-knit, but that wasn’t really the culture. Most CFAR staff members did not necessarily know much about my personal life and I did not know much about theirs.
I do not much venerate the founding team or consider them to be ultimate masters or whatever. There was a period early on when I was first working there where I sort of assumed everyone was more advanced than they actually were, but this faded with time. I think what you might consider “lionizing parables” I might consider “examples of people using the techniques in their own lives”. Here is an example of this type I’ve given many times at workshops as part of the TAPs class; the reader can decide whether it is a “lionizing parable” or not (note: exact wording may vary):
It can be useful to practice TAPs by actually physically practicing! I believe <a previous instructor’s name> once wanted to set up a TAP involving something they wanted to do after getting out of bed in the morning, so they actually turned off all the lights in their room, got into bed as if they were sleeping, set an alarm to go off as if it were the morning, then waited in bed for the alarm to go off, got up, did the action they were practicing… and then set the whole thing up again and repeated!
I’m very confused by what you deem “narrativemancy” here. I have encountered the term before but I don’t think it was intentionally taught as a CFAR technique or used internally as an explicit technique. IIRC the term also had at least somewhat negative valence.
I should clarify that I have been less involved in “day-to-day” CFAR stuff since mid-late 2016, though I have been at I believe a large majority of mainline workshops (I think I’m one of the most active instructors). It’s possible that the things you describe were occurring but in ways that I didn’t see. That said, they really don’t match with my picture of what working at CFAR was like.
I’ve worked at CFAR for most of the last 5 years, and this comment strikes me as so wildly incorrect and misleading that I have trouble believing it was in fact written by a current CFAR employee. Would you be willing to verify your identity with some mutually-trusted 3rd party, who can confirm your report here? Ben Pace has offered to do this for people in the past.
It sounds like they meant they used to work at CFAR, not that they currently do.
Also given the very small number of people who work at CFAR currently, it would be very hard for this person to retain anonymity with that qualifier so…
I think it’s safe to assume they were a past employee… but they should probably update their comment to make that clearer because I was also perplexed by their specific phrasing.
It sounds like they meant they used to work at CFAR, not that they currently do.
The interpretation of “I’m a CFAR employee commenting anonymously to avoid retribution” as “I’m not a CFAR employee, but used to be one” seems to me to be sufficiently strained and non-obvious that we should infer from the commenter’s choice not to use clearer language that they should be treated as having deliberately intended for readers to believe that they’re a current CFAR employee.
I like the local discourse norm of erring on the side of assuming good faith, but like steven0461, in this case I have trouble believing this was misleading by accident. Given how obviously false, or at least seriously misleading, many of these claims are (as I think accurately described by Anna/Duncan/Eli), my lead hypothesis is that this post was written by a former staff member, who was posing as a current staff member to make the critique seem more damning/informed, who had some ax to grind and was willing to engage in deception to get it ground, or something like that...?
FYI I just interpreted it to mean “former staff member” automatically. (This is biased by my belief that CFAR has very few current staff members so of course it was highly unlikely to be one, but I don’t think it was an unreasonably weird reading)
Relatedly, the organization uses a technique called goal factoring during debugging which was in large part inspired by Geoff Anders’ Connection Theory and was actually taught by Geoff at CFAR workshops at some point. This means that CFAR debugging in many ways resembles Leverage’s debugging and the similarity in naming isn’t just a coincidence of terms.
While it’s true that there’s some structural similarity between Goal Factoring and Connection Theory, and Geoff did teach Goal Factoring at some workshops (including one I attended), these techniques are more different than they are similar. In particular, goal factoring is taught as a solo technique for introspecting on what you want in a specific area, while Connection Theory is a therapy-like technique in which a facilitator tries to comprehensively catalog someone’s values across multiple sessions going 10+ hours.
I don’t have an object-level opinion formed on this yet, but want to +1 this as more of the kind of description I find interesting, and isn’t subject to the same critiques I had with the original post.
I’m interested in figuring out more what’s going on here—how do you feel about emailing me, hashing out the privacy issues, and, if we can get them hashed out, you telling me the four people you’re thinking of who had psychotic episodes?
Update: I interviewed many of the people involved and feel like I understand the situation better.
My main conclusion is that I was wrong about Michael making people psychotic. Everyone I talked to had some other risk factor, like a preexisting family or personal history, or took recreational drugs at doses that would explain their psychotic episodes.
Michael has a tendency to befriend people with high trait psychoticism and heavy drug use, and often has strong opinions on their treatment, which explains why he is often very close to people and very noticeable at the moment they become psychotic. But aside from one case where he recommended someone take a drug that made a bad situation slightly worse, and the general Berkeley rationalist scene that he (and I and everyone else here) is a part of having lots of crazy ideas that are psychologically stressful, I no longer think he is a major cause.
While interviewing the people involved, I did get some additional reasons to worry that he uses cult-y high-pressure recruitment tactics on people he wants things from, in ways that make me continue to be nervous about the effect he *could* have on people. But the original claim I made that I knew of specific cases of psychosis which he substantially helped precipitate turned out to be wrong, and I apologize to him and to Jessica. Jessica’s later post https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards explained in more detail what happened to her, including the role of MIRI and of Michael and his friends, and everything she said there matches what I found too. Insofar as anything I wrote above produces impressions that differs from her explanation, assume that she is right and I am wrong.
Since the interviews involve a lot of private people’s private details, I won’t be posting anything more substantial than this publicly without a lot of thought and discussion. If for some reason this is important to you, let me know and I can send you a more detailed summary of my thoughts.
I’m deliberately leaving this comment in this obscure place for now while I talk to Michael and Jessica about whether they would prefer a more public apology that also brings all of this back to people’s attention again.
I want to summarize what’s happened from the point of view of a long-time MIRI donor and supporter:
My primary takeaway from the original post was that MIRI/CFAR had cultish social dynamics, that this led to the spread of short-term AI timelines in excess of the evidence, and that voices such as Vassar’s were marginalized (because listening to other arguments would cause them to “downvote Eliezer in his head”). The actual important parts of this whole story are a) the rationalistic health of these organizations, b) the (possibly improper) memetic spread of the short timelines narrative.
It has been months since the OP, but my recollection is that Jessica posted this memoir, got a ton of upvotes, then you posted your comment claiming that being around Vassar induced psychosis, the karma on Jessica’s post dropped in half while your comment that Vassar had magical psychosis inducing powers is currently sitting at almost five and a half times the karma of the OP. At this point, things became mostly derailed into psychodrama about Vassar, drugs, whether transgender people have higher rates of psychosis, et cetera, instead of discussion about the health of these organizations and how short AI timelines came to be the dominant assumption in this community.
I do not actually care about the Vassar matter per se. I think you should try to make amends with him and Jessica, and I trust that you will attempt to do so. But all the personal drama is inconsequential next to the question of whether MIRI and CFAR have good epistemics and how the short timelines meme became widely believed. I would ask that any amends you try to make also address the fact that your comment derailed these very vital discussions.
Thanks so much for talking to the folks involved and writing this note on your conclusions, I really appreciate that someone did this (who I trust to actually try to find out what happened and report their conclusions accurately).
My main conclusion is that I was wrong about Michael making people psychotic.
...
Michael has a tendency to befriend people with high trait psychoticism and heavy drug use, and often has strong opinions on their treatment, which explains why he is often very close to people and very noticeable at the moment they become psychotic.
This does not contradict “Michael making people psychotic”. A bad therapist is not excused by the fact that his patients were already sick when they came to him.
Disclaimer: I do not know any of the people involved and have had no personal dealings with any of them.
I’ve seen the term used a few times on LW. Despite its denotational usefulness, it’s very hard to keep it from connotationally being a slur, short of something like an existing slur already being in circulation and the new term being defined as its denotational, non-slur counterpart (how the term actually sounds doesn’t help either).
So it’s a good principle to not give it power by using it (at least in public).
Psychedelic use was common among the leadership of CFAR and spread through imitation, if not actual institutional encouragement, to the rank-and-file. This makes it highly distressing that Michael is being singled out for his drug advocacy by people defending CFAR.
I remember someone who lived in Berkeley in 2016-2017 telling me something along the lines of “CFAR can’t legally recommend that people try LSD, but...”. This person wasn’t a CFAR employee, but was definitely talking extensively with CFAR people (collaborating on rationality techniques/instruction?) and had gone to a CFAR workshop. I don’t remember what followed the “but”; I don’t think the specific wording was even intended to be remembered (to preserve plausible deniability?), but it gave me the impression that CFAR people may have recommended it if it were legal to do so, as implied by the “but”. This was before I was talking with Michael Vassar extensively. This is some amount of Bayesian evidence for the above.
It’s true some CFAR staff have used psychedelics, and I’m sure they’ve sometimes mentioned that in private conversation. But CFAR as an institution never advocated psychedelic use, and that wasn’t just because it was illegal, it was because (and our mentorship and instructor trainings emphasize this) psychedelics often harm people.
I’d be interested in hearing from someone who was around CFAR in the first few years to double check that the same norm was in place. I wasn’t around before 2015.
What does “significant involvement” mean here? I worked for CFAR full-time during that period and to the best of my knowledge you did not work there—I believe for some of that time you were dating someone who worked there; is that what you mean by significant involvement?
I remember being a “guest instructor” at one workshop, and talking about curriculum design with Anna and Kenzi. I was also at a lot of official and unofficial CFAR retreats/workshops/etc. I don’t think I participated in much of the normal/official CFAR process, though I did attend the “train the trainers workshop”, and in this range of contexts saw some of how decisions were made, how workshops were run, how people related to each other at parties.
As I recall it, what I observed first-hand and was told second-hand at the time confirms bullets 2, 4, and 6 of the top-level comment. Many of the others are about how people felt, and are consistent with what people I knew reported at the time. Nothing in the top-level comment seems dissonant with what I observed.
It seems like there was a lot of fragmentation (which is why we mostly didn’t interact). I felt bad about exercising (a small amount of) unaccountable influence at the time through these mechanisms, but I was confused about so much relative to the rate at which I was willing to ask questions that I didn’t end up asking about the info-siloing. In hindsight it seems intended to keep the true nature of governance obscure and therefore unaccountable. I did see or at least hear reports of Anna pretending to give different people authority over things and then intervening if they weren’t doing the thing she expected, which is consistent with that hypothesis.
I’m afraid I don’t remember a lot of details beyond this, I had a lot going on that year aside from CFAR.
Unfortunately I think the working relationship between Anna and Kenzi was exceptionally bad in some ways and I would definitely believe that someone who mostly observed that would assume the organization had some of these problems; however I think this was also a relatively unique situation within the organization.
(I suspect though am not certain that both Anna and Kenzi would affirm that indeed this was an especially bad dynamic.)
With respect to point 2, I do not believe there was major peer pressure at CFAR to use psychedelics, and I have never used psychedelics myself. It’s possible that there was major peer pressure on other people, or that it applied to me and I was oblivious to it, but I’d be surprised.
Psychedelic use was also one of a few things that were heavily discouraged (or maybe banned?) as conversation topics for staff at workshops—like polyphasic sleep (another heavily discouraged topic), psychedelics were I believe viewed as potentially destabilizing and inappropriate to recommend to participants, plus there are legal issues involved. I personally consider recreational use of psychedelics to be immoral as well.
My comment initially said 2014-2016 but IIRC my involvement was much less after 2015 so I edited it.
Thanks for the clarification, I’ve edited mine too.
What do you see as the main sorts of interventions CFAR was organized around? I feel like this is a “different worlds” thing where I ought to be pretty curious what the whole scene looked like to you, what it seemed like people were up to, what the important activities were, & where progress was being made (or attempted).
I think that CFAR, at least while I was there full-time from 2014 to sometime in 2016, was heavily focused on running workshops or other programs (like the alumni reunions or the MIRI Summer Fellows program). See for instance my comment here.
Most of what the organization was doing seemed to involve planning and executing workshops or other programs and teaching the existing curriculum. There were some developments and advancements to the curriculum, but they often came from the workshops or something around them (like followups) rather than a systematic development project. For example, Kenzi once took on the lion’s share of workshop followups for a time, which led to her coming up with new curriculum based on her sense of what the followup participants were missing even after having attended the workshop.
(In the time before I joined there had been significantly more testing of curriculum etc. outside of workshops, but this seemed to have become less the thing by the time I was there.)
A lot of CFAR’s internal focus was on improving operations capacity. There was at one time a narrative that the staff was currently unable to do some of the longer-term development because too much time was spent on last minute scrambles to execute programs, but once operations sufficiently improved, we’d have much more open time to allocate to longer-term development.
I was skeptical of this and I think ultimately vindicated—CFAR made major improvements to its operations, but this did not lead to systematic research and development emerging, though it did allow for running more programs and doing so more smoothly.
[Deleted]
This is close to my experience. Notably, there was a moment in CFAR’s history when it felt like the staff had developed a deep and justified rapport, and was able to safely have conversations on extremely tricky and intimate topics. Then a number of new hires were just—dropped in, sans orientation, and there was an explicit expectation that I/we go on being just as vulnerable and trusting as we had been the day before. I boycotted those circles for several months before tolerance-for-boycott ran out and I was told I had to start coming again because it was a part of the job. I disagree with “The whole point of circling is to create a state of emotional vulnerability and openness in the person who is being circled,” but I don’t disagree that this is often the effect, and I don’t disagree with “This often required rank-and-file members to be emotionally vulnerable to the leadership who perhaps didn’t actually have their best interests at heart.”
This also has the ring of truth, though I’m actually somewhat confused by the rank-and-file comment. Without trying to pin down or out this person, there were various periods at CFAR in which the organization was more (or less) flat and egalitarian, so there were many times (including much of my own time there) when it wouldn’t make sense to say that “rank-and-file employees” was a category that existed. However, if I think about the times when egalitarianism was at its lowest, and people had the widest diversity of power and responsibility, those times did roughly correspond with high degrees of circling and one-on-one potentially head-melty conversations.
This bullet did not resonate with me at all, but I want to be clear that that’s not me saying “no way.” Just that I did not experience this, and do not recall hearing this complaint, and do not recall participating in the kind of close debugging that I would expect to create this feeling. I had my own complaints about work/life boundaries, but for me personally they didn’t lie in “I can’t get away from the circles and the debugging.” (I reiterate that there wasn’t much debugging at all in my own experience, and all of that solicited by people wanting specific, limited help with specific, limited problems (as opposed to people virtuously producing desire-to-be-debugged in response to perceived incentives to do so, as claimed in some of the Leverage stuff).)
This zero percent matches my experience, enough that I consider this the strongest piece of evidence that this person and I did not overlap, or had significant disoverlap. The other alternative being that I just swam in a different subcultural stream. But my relationships with friends and family utterly disconnected from the Bay Area and the EA movement and the rationalist community only broadened and strengthened during my time at CFAR.
Comments like this make me go “ick” at the conflation between CFAR and MIRI, which are extremely different institutions with extremely different internal cultures (I have worked at each). But if I limit this comment to just my experience at CFAR—yes, this existed, and bothered me, and I can recall several instances of frustratedly trying to push back on exactly this sort of mentality. e.g. I had a disagreement with a staff member who claimed that the Bay Area rationalist community had some surprising-to-me percentage of the world’s agentic power (it might have been 1%, it might have been 10%; either way, it struck me as way too high). That being said, that staff member and I had a cordial and relatively productive disagreement. It’s possible that I was placed highly enough in the hierarchy that I wasn’t subject to the kind of pressure that this person’s account seems to imply.
I did not have this experience. I did, however, have the experience of something like “if Anna thinks your new idea for a class (or whatever) is interesting, it will somehow flourish and there will be lots of discussion, and if Anna thinks it’s boring or trivial, then you’ll be perfectly able to carry on tinkering with it by yourself, or if you can convince anyone else that it’s interesting.” I felt personally grumpy about the different amount of water I felt different ideas got; some I thought were unpromising got much more excitement than some I thought were really important.
However, in my own personal experience/my personal story, this is neither a) Anna’s fault, nor b) anything other than business as usual? Like, I did not experience, at all, any attempt from Anna to cultivate some kind of mystique, or to try to swing other people around behind her. Quite the contrary—I multiple times saw Anna try pretty damn hard to get people to unanchor from her own impressions or reactions, and I certainly don’t blame her for being honest about what she found promising, even where I disagreed. My sense was that the stuff I was grumpy about was just the result of individuals freely deferring to Anna’s judgment, or just the way that vibes and enthusiasm spread in monkey social groups. I never felt like, for instance, Anna (or anyone on Anna’s behalf) was trying to suffocate one of my ideas. It just felt like my ideas had a steeper hill in front of them, due to no individual’s conscious choices. Moloch, not malice.
Did not experience. Do not rule out, but did not experience. Can neither confirm nor deny.
Yes. This bothered me no end, and I both sparked and joined several attempts to get new curriculum development initiatives off the ground. None of these were particularly successful, and I consider it really really bad that no substantially new CFAR content was published in my last year (or, to the best of my knowledge, in the three years since). However, to be clear, I also did not experience any institutional resistance to the idea of new development. It just simply wasn’t prioritized on a mission level and therefore didn’t cohere.
This reads as outright false to me, like the kind of story you’d read about in a clickbait tabloid that overheard enough words to fabricate something but didn’t actually speak to anyone on the ground.
The closest I can think of to what might have sparked the above description is Val’s theorizing on narrativemancy and the social web? But this mainly played out in scattered colloquium talks that left me, at least, mostly nonplussed. To the extent that there was occasional push toward non-ironic use of magical schemas, I explicitly and vigorously pushed back (I had deep misgivings about even tiny, nascent signs of woo within the org). But I saw nothing that resembles “people acting as narrativemancers” or telling stories based on clichés or genre tropes. I definitely never told such stories myself, and I never heard one told about me or around me.
That being said, the same caveats apply: this could have been at a different time, or in a different subculture within the org, or something I just missed. I am not saying “this anecdote is impossible.” I’m just saying ????
I will say this, though: to the extent that the above description is accurate, that’s deeply fucked. Like, I want to agree wholeheartedly with the poster’s distaste for the described situation, separate from my ability to evaluate whether it took place. That’s exactly the sort of thing you go to a “center for applied rationality” to escape, in my book.
I do not recognize the vibe of this anecdote, either (can’t think of “offenses” committed or people sitting in judgment; sometimes people didn’t show up on time for meetings? Or there would be personal disagreements between e.g. romantic exes?). However, I will note that CFAR absolutely blurred the line between formal workshop settings, after-workshop parties, and various tiers of alumni events that became more or less intimate depending on who was invited. While I didn’t witness any “I can’t tell what rules apply; am I at work or not?” confusion, it does seem to me that CFAR in particular would be 10x more likely to create that confusion in someone than your standard startup. So: credible?
Again, confusing and not at all in synch with my personal experience. But again: plausible/credible, especially if you add in the fact that I had a relatively secure role and am relatively socially oblivious. I do not find it hard to imagine being a more junior staff member and feeling the anxiety and insecurity described.
I don’t know. I can’t tell how helpful any of my commentary here is. I will state that while CFAR and I have both tried to be relatively polite and hands-off with each other since parting ways, no one ever tried to get me to sign an NDA, or implied that I couldn’t or shouldn’t speak freely about my experiences or opinions. I’ve been operating under the more standard-in-our-society just-don’t-badmouth-your-former-workplace-and-they-won’t-badmouth-you peace treaty, which seems good for all sorts of reasons and didn’t seem unusually strong for CFAR in particular.
Which is to say: I believe myself to be free to speak freely, and I believe myself to be being candid here. I am certainly holding many thoughts and opinions in reserve, but I’m doing so by personal choice and golden-rule policy, and not because of a sense that Bad Things Would (immediately, directly) Happen If I Didn’t.
Shrug emoji?
As a general dynamic, no idea if it was happening here but just to have as a hypothesis, sometimes people selectively follow rules of behavior around people that they expect will seriously disapprove of the behavior. This can be well-intentioned, e.g. simply coming from not wanting to harm people by doing things around them that they don’t like, but could have the unfortunate effect of producing selected reporting: you don’t complain about something if you’re fine with it or if you don’t see it, so the only reports we get are from people who changed their mind (or have some reason to complain about something they don’t actually think is bad). (Also flagging that this is a sort of paranoid hypothesis; IDK how the world is on this dimension, but the Litany of Gendlin seems appropriate. Also it’s by nature harder to test, and therefore prone to the problems that untestable hypotheses have.)
This literally happened with Brent; my current model is that I was (EDIT: quite possibly unconsciously/reflexively/non-deliberately) cultivated as a shield by Brent, in that he much-more-consistently-than-one-would-expect-by-random-chance happened to never grossly misbehave in my sight, and other people, assuming I knew lots of things I didn’t, never just told me about gross misbehaviors that they had witnessed firsthand.
Damn.
The two stories here fit consistently in a world where Duncan feels less social pressure than others including Phoenix, so that Duncan observes people seeming to act freely but Molochianly, and they experience network-effect social pressure (which looks Molochian, but is maybe best thought of as a separate sort of thing).
I worked for CFAR from 2016 to 2020, and am still somewhat involved.
This description does not reflect my personal experience at all.
And speaking from my view of the organization more generally (not just my direct personal experience): Several bullet points seem flatly false to me. Many of the bullet points have some grain of truth to them, in the sense that they refer to or touch on real things that happened at the org, but then depart wildly from my understanding of events, or (according to me) mischaracterize / distort things severely.
I could go through and respond in more detail, point by point, if that is really necessary, but I would prefer not to do that, since it seems like a lot of exhausting work.
As a sort of free sample / downpayment:
I don’t know who this is referring to. To my knowledge 0 people who are or have been staff at CFAR had a psychotic episode either during or after working at CFAR.
First of all, I think the use of “rank-and-file” throughout this comment is misleading to the point of being dishonest. CFAR has always been a small organization of no more than 10 or 11 people, often flexibly doing multiple roles. The explicit organizational structure involved people having different “hierarchical” relationships depending on context.
In general, different people lead different projects, and the rest of the staff would take “subordinate” roles in those projects. That is, if Elizabeth is leading a workshop, she would delegate specific responsibilities to me as one of her workshop staff. But in a different context, where I’m leading a project, I might delegate to her, and I might have the final say. (At one point this was an official, structural policy, with a hierarchy of reporting mapped out on a spreadsheet, but for most of the time I’ve been there it has been much more organic than that.)
But these hierarchical arrangements are transient and do not at all dominate the experience of working for CFAR. Mostly we are and have been a group of pretty independent contributors, with different views about x-risk and rationality and what-CFAR-is-about, who collaborate on specific workshops and (in a somewhat more diffuse way) in maintaining the organization. There is not anything like the hierarchy you typically see in larger organizations, which makes the frequent use of the term “rank and file” seem out of place and disingenuous, to me.
Certainly, Anna was always in a leadership role, in the sense that the staff respected her greatly, and were often willing to defer to her, and at most times there was an Executive Director (ED) in addition to Anna.
That said, I don’t think that Anna, or either of the EDs, ever confided to me that they had taken psychedelics, even in private. I certainly didn’t feel pressured to do psychedelics, and I don’t see how that practice could have spread by imitation, given that it was never discussed, much less modeled. And there was not anything like “institutional encouragement”.
The only conversations I remember having about psychedelic drugs are the conversations in which we were told that it was one of the topics that we were not to discuss with workshop participants, and a conversation in which Anna strongly stated that psychedelics were destabilizing and implied that they were...generally bad, or at least that being reckless with them was really bad.
Personally, I have never taken any psychoactive drugs aside from nicotine (and some experimentation with caffeine and modafinil, once). This stance was generally respected by CFAR staff. Occasionally, some people (not Anna or either ED) expressed curiosity about or gently ribbed me about my hard-line stance of not drinking alcohol, but in a way that was friendly and respectful of my boundaries. My impression is that Anna more-or-less approves of my stance on drugs, without endorsing it as the only or obvious stance.
This is false, or at minimum is overly general, in that it does not resemble my experience at all.
My experience:
I could and can easily avoid debugging sessions with Anna. Every interaction that I’ve had with her has been consensual, and she has, to my memory, always respected my boundaries, when I had had enough, or was too tired, or the topic was too sensitive, or whatever. In general, if I say that I don’t want to talk about something, people at CFAR respect that. They might offer care, or help, in case I decided I wanted it, but then they would leave me alone. (Most of the debugging, etc., conversations that I had at CFAR, I explicitly sought out.)
This also didn’t happen that frequently. While I’ve had lots of conversations with Anna, I estimate I’ve had deep “soulful” conversations, or conversations in which she was explicitly teaching me a mental technique...around once every 4 months, on average?
Also, though it has happened somewhat more rarely, I have participated in debugging style conversations with Anna where I was in the “debugger” role.
(By the way, in CFAR’s context, the “debugger” role is explicitly a role of assistance / midwifery, of helping a person get traction and understanding on some problem, rather than an active role of doing something to or intervening on the person being debugged.
Though I admit that this can still be a role with a lot of power and influence, especially in cases where there is an existing power or status differential. I do think that early in my experience with CFAR, I was too willing to defer to Anna about stuff in general, and might make big changes in my personal direction at her suggestion, despite not really having an inside view of why I should prefer that direction. She and I would both agree, today, that this is bad, though I don’t consider myself to have been majorly harmed by it. I also think it is not that unusual. Young people are often quite influenced by role models that they are impressed by, often without clear-to-them reasons.)
I have never heard the phrase “engine of desperation” before today, though it is true that there was a period in which Anna was interested in a kind of “quiet desperation” that she thought was an effective place to think and act from.
I am aware of some cases of Anna debugging with CFAR staff that seem somewhat more fraught than my own situation, but from what I know of those, they are badly characterized by the above bullet point.
I could go on, and I will if that’s helpful. I think my reaction to these first few bullet points is a broadly representative sample.
I endorse Eli’s commentary.
Thank you for adding your detailed take/observations.
My own take on some of the details of CFAR that’re discussed in your comment:
I think there were serious problems here, though our estimates of the frequencies might differ. To describe the overall situation in detail:
I often got debugging help from other members of CFAR, but, as noted in the quote, it was voluntary. I picked when and about what and did not feel pressure to do so.
I can think of at least three people at CFAR who had a lot of debugging sort of forced on them (visibly expected as part of their job set-up or of check-in meetings or similar; they didn’t make clear complaints but that is still “sort of forced”), in ways that were large and that seem to me clearly not okay in hindsight. I think lots of other people mostly did not experience this. There are a fair number of people about whom I am not sure or would make an in-between guess. To be clear, I think this was bad (predictably harmful, in ways I didn’t quite get at the time but that e.g. standard ethical guidelines in therapy have long known about), and I regret it and intend to avoid “people doing extensive debugging of those they have direct power over” contexts going forward.
I believe this sort of problem was more present in the early years, and less true as CFAR became older, better structured, somewhat “more professional”, and less centered around me. In particular, I think Pete’s becoming ED helped quite a bit. I also think the current regime (“holacracy”) has basically none of this, and is structured so as to predictably have basically none of this—predictably, since there’s not much in the way of power imbalances now.
It’s plausible I’m wrong about how much of this happened, and how bad it was, in different eras. In particular, it is easy for those in power (e.g., me) to underestimate aspects of how bad it is not to have power; and I did not do much to try to work around the natural blindspot. If anyone wants to undertake a survey of CFAR’s past and present staff on this point (ideally someone folks know and can accurately trust to maintain their anonymity while aggregating their data, say, and then posting the results to LW), I’d be glad to get email addresses for CFAR’s past and present staff for the purpose.
I’m sure I did not describe my process as “implanting an engine of desperation”; I don’t remember that and it doesn’t seem like a way I would choose to describe what I was doing. “Implanting” especially doesn’t. As Eli notes (this hadn’t occurred to me, but might be what you’re thinking of?), I did talk some about trying to get in touch with one’s “quiet desperation”, and referenced Pink Floyd’s song “Time” and “the mass of men lead lives of quiet desperation” and developed concepts around that; but this was about accessing a thing that was already there, not “implanting” a thing. I also led many people in “internal double cruxes around existential risk”, which often caused fairly big reactions as people viscerally noticed “we might all die.”
I disagree with this point overall. Goal-Factoring was first called “use fungibility”, a technique I taught within a class called “microeconomics 1” at the CFAR 2012 minicamps prior to Geoff doing any teaching. It was also discussed at times in some form at the old SingInst visiting fellows program, IIRC.
Geoff developed it, and taught it at many CFAR workshops in early years (2013-2014, I think). The choice of Goal-Factoring as the thing Geoff taught (was asked to teach? wanted to teach? I don’t actually remember; probably both?) was, I think, partly to do with its resemblance to the beginning/repeated basic move in Connection Theory.
My guess is that there were asymmetries like this, and that they were important, and that they were not worse than most organizations (though that’s really not the right benchmark). Insofar as you have experience at other organizations (e.g. mainstream tech companies or whatnot), or have friends with such experience who you can ask questions of, I am curious how you think they compare.
On my own list of “things I would do really differently if I was back in 2012 starting CFAR again”, the top-ranked item is probably:
Share information widely among staff, rather than (mostly unconsciously/not-that-endorsedly) using lack-of-information-sharing to try to control people and outcomes.
Do consider myself to have some duty to explain decisions and reply to questions. Not “before acting”, because the show must go on and attempts to reach consensus would be endless. And not “with others as an authority that can prevent me from acting if they don’t agree.” But yes with a sincere attempt to communicate my actual beliefs and causes of actions, and to hear others’ replies, insofar as time permits.
I don’t think I did worse than typical organizations in the wider world, on the above points.
I’m honestly uncertain how much this is/isn’t related to the quoted complaint.
Duncan’s reply here is probably more accurate to the actual situation at CFAR than mine would be. (I wrote much of the previous paragraphs before seeing his, but endorsing Duncan’s on this here seems best.) If Pete wants to weigh in I would also take his perspective quite seriously here. I don’t quite remember some of the details.
As Duncan noted, “creating a state of emotional vulnerability and openness” is really not supposed to be the point of circling, but it is a thing that happens pretty often and that a person might not know how to avoid.
The point of circling IMO is to break all the fourth walls that conversations often skirt around, let the subtext or manner in which the conversation is being done be made explicit text, and let it all thereby be looked at together.
A different thing that I in hindsight think was an error (that I already had on my explicit list of “things to do differently going forward”, and had mentioned in this light to a few people) was using circling in the way we did at AIRCS workshops, where some folks were there to try to get jobs. My current view, as mentioned a bit above, is that something pretty powerfully bad sometimes happens when a person accesses bits of their insides (in the way that e.g. therapy or some self-help techniques lead people to) while also believing they need to please an external party who is looking at them and has power over them.
(My guess is that well-facilitated circling is fine at AIRCS-like programs that are less directly recruiting-oriented. Also that circling at AIRCS had huge upsides. This is a can of worms I don’t plan to go into right now, in the middle of this comment reply, but flagging it to make my above paragraph not overgeneralized-from.)
I believe this was your experience, and am sorry. My non-confident guess is that some others experienced this and most didn’t, and that the impact on folks’ mental privacy was considerably more invasive than a standard workplace would’ve been, and that the impact on folks’ integrity was probably less bad than my guess at many mainstream workplace’s impact but still a lot worse than the CFAR we ought to aim for.
Personally I am not much trying to maintain the privacy of my own mind at this point, but I am certainly trying to maintain its integrity, and I think being debugged by people with power over me would not be good for that.
This wasn’t my experience at all, personally. I did have some feeling of distance when I first started caring about AI risk in ~2008, but it didn’t get worse across CFAR. I also stayed in a lot of contact with folks outside the CFAR / EA / rationalist / AI risk spheres through almost all of it. I don’t think I looked down on outsiders.
I thought CFAR and MIRI were part of a rare and important thing, but I did not think CFAR (nor CFAR + MIRI) was the only thing to matter. I do think there’s some truth in the “rarity narrative” claim, at CFAR, mostly via me and to a much smaller extent some others at CFAR having some of this view of MIRI.
I agree that this happened and that it was a problem. I didn’t consciously intend to set this up, but my guess is that I did a bunch of things to cause it anyhow. In particular, there’s a certain way I used to sort of take the ground out from under people when we talked, that I think contributed to this. (I used to often do something like: stay cagey about my own opinions; listen carefully to how my interlocutor was modeling the world; show bits of evidence that refuted some of their assumptions; listen to their new model; repeat; … without showing my work. And then they would defer to me, instead of having stubborn opinions I didn’t know how to shift, which on some level was what I wanted.)
People at current-CFAR respect my views still, but it actually feels way healthier to me now. Partly because I’m letting my own views and their causes be more visible, which I think makes it easier to respond to. And because I somehow have less of a feeling of needing to control what other people think or do via changing their views.
(I haven’t checked the above much against others’ perceptions, so would be curious for anyone from current or past CFAR with a take.)
I believe this was your experience, mostly because I’m pretty sure I know who you are (sorry; I didn’t mean to know and won’t make it public) and I can think of at least one over-the-top (but sincere) conversation you could reasonably describe at least sort of this way (except for the “with high confidence”, I guess, and the “frequent”; and some other bits), plus some repeated conflicts. I don’t think this was a common experience, or that it happened much at all (or at all at all?) in contexts not involving you, but it’s possible I’m being an idiot here somehow in which case someone should speak up. Which I guess is to say that the above bullet point seems to me, from my experiences/observations, to be mostly or almost-entirely false, but that I think you’re describing your experiences and guesses about the place accurately and that I appreciate you speaking up.
Anyhow, thanks for writing, and I’m sorry you had bad experiences at CFAR, especially about the fairly substantial parts of the above bad parts that were my fault.
I expect my reply will accidentally make some true points you’re making harder to see (as well as hopefully adding light to some other parts), and I hope you’ll push back in those places.
Related to my reply to PhoenixFriend (in the parent comment), but hopping meta from it:
I have a question for whoever out there thinks they know how the etiquette of this kind of conversation should go. I had a first draft of my reply to PhoenixFriend, where I … basically tried to err on the side of being welcoming, looking for and affirming the elements of truth I could hear in what PhoenixFriend had written, and sort of emphasizing those elements more than my also-real disagreements. I ran it by a CFAR colleague at my colleague’s request, who said something like “look, I think your reply is pretty misleading; you should be louder and clearer about the ways your best guess about what happened differed from what’s described in PhoenixFriend’s comment. Especially since I and others at CFAR have our names on the organization too, so if you phrase things in ways that’ll cause strangers who’re skim-reading to guess that things at CFAR were worse than they were, you’ll inaccurately and unjustly mess with other peoples’ reputations too.” (Paraphrased.)
So then I went back and made my comments more disagreeable and full of details about where my and PhoenixFriend’s models differ. (Though probably still less than the amount that would’ve fully addressed my colleague’s complaints.)
This… seems better in that it addresses my colleague’s pretty reasonable desire, but worse in that it is not welcoming to someone who is trying to share info and is probably finding that hard. I am curious if anyone has good thoughts on how this sort of etiquette should go, if we want to have an illuminating, get-it-all-out-there, non-misleading conversation.
Part of why I’m worried is that it seems to me pretty easy for people who basically think the existing organizations are good, and also that mainstream workplaces are non-damaging and so on, to upvote/downvote each new datum based on those priors plus a (sane and sensible) desire to avoid hurting others’ feelings and reputations without due cause, etc., in ways that, despite their reasonability, may make it hard for real and needed conversations that are contrary to our current patterns of seeing to get started.
For example, I think PhoenixFriend indeed saw some real things at CFAR that many of those downvoting their comment did not see and mistakenly wouldn’t expect to see, but that also many of the details of PhoenixFriend’s comment are off, partly maybe because they were mis-generalizing from their experiences and partly because it’s hard to name things exactly (especially to people who have a bit of an incentive to mishear.)
(Also, to try briefly and poorly to spell out why I’m rooting for a “get it all out on the table” conversation, and not just a more limited “hear and acknowledge the mostly blatant/known harms, correct those where possible, and leave the rest of our reputation intact” conversation: basically, I think there’s a bunch of built-up “technical debt”, in the form of confusion and mistrust and trying-not-to-talk-about-particular-things-because-others-will-form-“unreasonable”-conclusions-if-we-do and who-knows-why-we-do-that-but-we-do-so-there’s-probably-a-reason, that I’m hoping gets cleared out by the long and IMO relatively high-quality and contentful conversation that’s been happening so far. I want more of that if we can get it. I want culture and groups to be able to build around here without building on top of technical debt. I also want information about how organizations do/don’t work well, and, in terms of means of acquiring this information, I much prefer bad-looking conversations on LW to wasting another five years doing it wrong.)
This sounds like an extreme and surprising statement. I wrote out some clarifying questions like “what do you mean by privacy here”, but maybe it’d be better to just say:
I think it strikes me funny because it sounds sort of like a PR statement. And it sounds like a statement that could set up a sort of “iterations of the Matrix”-like effect. Where, you say “ok now I want to clear out all the miasma, for real”, and then you and your collaborators do a pretty good job at that; but also, something’s been lost or never gained, namely the logical common knowledge that there’s probably-ongoing, probably difficult to see dynamics that give rise to the miasma of {ungrounded shared narrative, information cascades, collective blindspots, deferrals, circular deferrals, misplaced/miscalibrated trust, etc. ??}. In other words, since these things happened in a context where you and your collaborators were already using reflection, introspection, reasoning, communication, etc., we learn that the ongoing accumulation of miasma is a more permanent state of affairs, and this should be common knowledge. Common knowledge would for example help with people being able to bring up information about these dynamics, and expect their information to be put to good use.
(I notice an analogy between iterations of the Matrix and economic boom-bust cycles.)
These statements also seem to imply a framing that potentially has the (presumably unintentional) effect of subtly undermining the common knowledge of ongoing miasma-or-whatever. Like, it sort of directs attention to the content but not the generator, or something; like, one could go through all the “stuff” and then one would be done.
Well, maybe I phrased it poorly; I don’t think what I’m doing is extreme; “much” is doing a bunch of work in my “I am not much trying to...” sentence.
I mean, there’s plenty I don’t want to share, like a normal person. I have other people’s confidential info that I’m committed to not sharing, and plenty of my own stuff that I am private about for whatever reason. But in terms of rough structural properties of my mind, or most of my beliefs, I’m not much trying for privacy. Like when I imagine being in a context where a bunch of circling is happening or something (circling allows silence, ignoring questions, etc.; still, people sometimes complain that facial expressions leak through and they don’t know how to avoid it), I’m not personally like “I need my privacy though.” And I’ve updated some toward sharing more compared to what I used to do.
Ok, thanks for clarifying. (To reiterate my later point, since it sounds like you’re considering the “narrative pyramid schemes” hypothesis: I think there is not common knowledge that narrative pyramid schemes happen, and that common knowledge might help people continuously and across contexts share more information, especially information that is pulling against the pyramid schemes, by giving them more of a true expectation that they’ll be heard by a something-maximizing person rather than a narrative-executer).
I have concrete thoughts about the specific etiquette of such conversations (they’re not off the cuff; I’ve been thinking more-or-less continuously about this sort of thing for about eight years now).
However, I’m going to hold off for a bit because:
a) Like Anna, I was a part of the dynamics surrounding PhoenixFriend’s experience, and so I don’t want to seize the reins
b) I’ve also had a hard time coordinating with Anna on conversational norms and practices, both while at CFAR and recently
… so I sort of want to not-pretend-I-don’t-have-models-and-opinions-here (I do) but also do something like “wait several days and let other people propose things first” or “wait until directly asked, having made it clear that I have thoughts if people want them” or something.
link to the essay if/when you write it?
I endorse Anna’s commentary.
As a participant of Rationality Minicamp in 2012, I confirm this. Actually, found the old textbook, look here!
Okay, so, that old textbook does not look like a picture of goal-factoring, at least not on that page. But I typed “goal-factoring” into my google drive and turned up these old notes, written while designing classes for the 2012 minicamps, that use the term. A rabbithole, but one I enjoyed, so maybe others will too.
I worked for CFAR full-time from 2014 until mid-to-late 2016 and have continued working as a part-time employee or frequent contractor since. I’m sorry this was your experience. That said, it really does not mesh that much with what I’ve experienced and some of it is almost the opposite of the impressions that I got. Some brief examples:
My experience was that CFAR if anything should have used its techniques internally much more. Double crux for instance felt like it should have been used internally far more than it actually was—one thing that vexed me about CFAR was a sense that there were persistent unresolved major strategic disagreements between staff members that the organization did not seem to prioritize resolving, where I think double crux would have helped.
(I’m not talking about personal disagreements but rather things like “should X set of classes be in the workshop or not?”)
Similarly, goal factoring didn’t see much internal use (I again think it should have been used more!) and Leverage-style “charting” strikes me as really a very different thing from the way CFAR used this sort of stuff.
There was generally little internal “debugging” at all, which contrary to the previous two cases I think is mostly correct—the environment of having your colleagues “debug” you seems pretty weird and questionable. I do think there was at least some of this, but I don’t think it was pervasive or mandatory in the organization and I mostly avoided it.
Far from spending all my time with team members outside of work, I think I spent most of my leisure and social time with people from other groups, many outside the rationalist community. To some degree I (and I think some others) would have liked for the staff to be tighter-knit, but that wasn’t really the culture. Most CFAR staff members did not necessarily know much about my personal life and I did not know much about theirs.
I do not much venerate the founding team or consider them to be ultimate masters or whatever. There was a period early on when I was first working there where I sort of assumed everyone was more advanced than they actually were, but this faded with time. I think what you might consider “lionizing parables” I might consider “examples of people using the techniques in their own lives”. Here is an example of the type I’ve given many times at workshops as part of the TAPs class; the reader can decide whether it is a “lionizing parable” or not (note: exact wording may vary):
It can be useful to practice TAPs by actually physically practicing! I believe <a previous instructor’s name> once wanted to set up a TAP involving something they wanted to do after getting out of bed in the morning, so they actually turned off all the lights in their room, got into bed as if they were sleeping, set an alarm to go off as if it were the morning, then waited in bed for the alarm to go off, got up, did the action they were practicing… and then set the whole thing up again and repeated!
I’m very confused by what you deem “narrativemancy” here. I have encountered the term before but I don’t think it was intentionally taught as a CFAR technique or used internally as an explicit technique. IIRC the term also had at least somewhat negative valence.
I should clarify that I have been less involved in “day-to-day” CFAR stuff since mid-late 2016, though I have been at I believe a large majority of mainline workshops (I think I’m one of the most active instructors). It’s possible that the things you describe were occurring but in ways that I didn’t see. That said, they really don’t match with my picture of what working at CFAR was like.
I’ve worked at CFAR for most of the last 5 years, and this comment strikes me as so wildly incorrect and misleading that I have trouble believing it was in fact written by a current CFAR employee. Would you be willing to verify your identity with some mutually-trusted 3rd party, who can confirm your report here? Ben Pace has offered to do this for people in the past.
I don’t know if you trust me, but I confirmed privately that this person is a past or present CFAR employee.
Sure, but they led with “I’m a CFAR employee,” which suggests they are a CFAR employee. Is this true?
It sounds like they meant they used to work at CFAR, not that they currently do.
Also given the very small number of people who work at CFAR currently, it would be very hard for this person to retain anonymity with that qualifier so…
I think it’s safe to assume they were a past employee… but they should probably update their comment to make that clearer because I was also perplexed by their specific phrasing.
The interpretation of “I’m a CFAR employee commenting anonymously to avoid retribution” as “I’m not a CFAR employee, but used to be one” seems to me to be sufficiently strained and non-obvious that we should infer from the commenter’s choice not to use clearer language that they should be treated as having deliberately intended for readers to believe that they’re a current CFAR employee.
I like the local discourse norm of erring on the side of assuming good faith, but like steven0461, in this case I have trouble believing this was misleading by accident. Given how obviously false, or at least seriously misleading, many of these claims are (as I think accurately described by Anna/Duncan/Eli), my lead hypothesis is that this post was written by a former staff member, who was posing as a current staff member to make the critique seem more damning/informed, who had some ax to grind and was willing to engage in deception to get it ground, or something like that...?
It seems misleading in a non-accidental way, but it seems fairly plausible that their main motive was to obscure their identity.
FYI I just interpreted it to mean “former staff member” automatically. (This is biased by my belief that CFAR has very few current staff members so of course it was highly unlikely to be one, but I don’t think it was an unreasonably weird reading)
PhoenixFriend edited the comment.
While it’s true that there’s some structural similarity between Goal Factoring and Connection Theory, and Geoff did teach Goal Factoring at some workshops (including one I attended), these techniques are more different than they are similar. In particular, goal factoring is taught as a solo technique for introspecting on what you want in a specific area, while Connection Theory is a therapy-like technique in which a facilitator tries to comprehensively catalog someone’s values across multiple sessions going 10+ hours.
Thanks for this reply, Jim; I winced a bit at my own “no resemblance whatsoever” and your comment is clearer and more accurate.
I don’t have an object-level opinion formed on this yet, but want to +1 this as more of the kind of description I find interesting, and isn’t subject to the same critiques I had with the original post.
Thanks for this.
I’m interested in figuring out more what’s going on here—how do you feel about emailing me, hashing out the privacy issues, and, if we can get them hashed out, you telling me the four people you’re thinking of who had psychotic episodes?
Update: I interviewed many of the people involved and feel like I understand the situation better.
My main conclusion is that I was wrong about Michael making people psychotic. Everyone I talked to had some other risk factor, like a preexisting family or personal history of psychosis, or took recreational drugs at doses that would explain their psychotic episodes.
Michael has a tendency to befriend people with high trait psychoticism and heavy drug use, and often has strong opinions on their treatment, which explains why he is often very close to people and very noticeable at the moment they become psychotic. But aside from one case where he recommended someone take a drug that made a bad situation slightly worse, and the general Berkeley rationalist scene that he (and I and everyone else here) is a part of having lots of crazy ideas that are psychologically stressful, I no longer think he is a major cause.
While interviewing the people involved, I did get some additional reasons to worry that he uses cult-y high-pressure recruitment tactics on people he wants things from, in ways that make me continue to be nervous about the effect he *could* have on people. But the original claim I made, that I knew of specific cases of psychosis which he substantially helped precipitate, turned out to be wrong, and I apologize to him and to Jessica. Jessica’s later post https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards explained in more detail what happened to her, including the role of MIRI and of Michael and his friends, and everything she said there matches what I found too. Insofar as anything I wrote above produces impressions that differ from her explanation, assume that she is right and I am wrong.
Since the interviews involve a lot of private people’s private details, I won’t be posting anything more substantial than this publicly without a lot of thought and discussion. If for some reason this is important to you, let me know and I can send you a more detailed summary of my thoughts.
I’m deliberately leaving this comment in this obscure place for now while I talk to Michael and Jessica about whether they would prefer a more public apology that also brings all of this back to people’s attention again.
I want to summarize what’s happened from the point of view of a long time MIRI donor and supporter:
My primary takeaway from the original post was that MIRI/CFAR had cultish social dynamics, that this led to the spread of short-term AI timelines in excess of the evidence, and that voices such as Vassar’s were marginalized (because listening to other arguments would cause them to “downvote Eliezer in his head”). The actual important parts of this whole story are a) the rationalistic health of these organizations, and b) the (possibly improper) memetic spread of the short-timelines narrative.
It has been months since the OP, but my recollection is that Jessica posted this memoir and got a ton of upvotes; then you posted your comment claiming that being around Vassar induced psychosis, the karma on Jessica’s post dropped by half, and your comment that Vassar had magical psychosis-inducing powers is currently sitting at almost five and a half times the karma of the OP. At this point, things became mostly derailed into psychodrama about Vassar, drugs, whether transgender people have higher rates of psychosis, et cetera, instead of discussion about the health of these organizations and how short AI timelines came to be the dominant assumption in this community.
I do not actually care about the Vassar matter per se. I think you should try to make amends with him and Jessica, and I trust that you will attempt to do so. But all the personal drama is inconsequential next to the question of whether MIRI and CFAR have good epistemics and how the short-timelines meme became widely believed. I would ask that any amends you make also address the fact that your comment derailed these vital discussions.
Thanks so much for talking to the folks involved and writing this note on your conclusions, I really appreciate that someone did this (who I trust to actually try to find out what happened and report their conclusions accurately).
...
This does not contradict “Michael making people psychotic”. A bad therapist is not excused by the fact that his patients were already sick when they came to him.
Disclaimer: I do not know any of the people involved and have had no personal dealings with any of them.
I’ve seen the term used a few times on LW. Despite its denotational usefulness, it’s very hard to keep it from connotationally being a slur. Avoiding that would take something like an already-existing slur, with the new term defined as its denotational non-slur counterpart (how the word actually sounds doesn’t help either).
So it’s a good principle to not give it power by using it (at least in public).
You contributing to this conversation seems good, PhoenixFriend. Thanks for saying your piece.
I remember someone who lived in Berkeley in 2016-2017, who wasn’t a CFAR employee but was definitely talking extensively with CFAR people (collaborating on rationality techniques/instruction?) and had gone to a CFAR workshop, telling me something along the lines of “CFAR can’t legally recommend that people try LSD, but...”. I don’t remember what followed the “but”; I don’t think the specific wording was even intended to be remembered (to preserve plausible deniability?), but it gave me the impression that CFAR people might have recommended it if it were legal to do so, as the “but” implied. This was before I was talking with Michael Vassar extensively. This is some amount of Bayesian evidence for the above.
It’s true some CFAR staff have used psychedelics, and I’m sure they’ve sometimes mentioned that in private conversation. But CFAR as an institution never advocated psychedelic use, and that wasn’t just because it was illegal; it was because (and our mentorship and instructor trainings emphasize this) psychedelics often harm people.
I’d be interested in hearing from someone who was around CFAR in the first few years to double check that the same norm was in place. I wasn’t around before 2015.
I had significant involvement with CFAR 2014-2015 and this is consistent with my impression.
What does “significant involvement” mean here? I worked for CFAR full-time during that period and to the best of my knowledge you did not work there—I believe for some of that time you were dating someone who worked there, is that what you mean by significant involvement?
I remember being a “guest instructor” at one workshop, and talking about curriculum design with Anna and Kenzi. I was also at a lot of official and unofficial CFAR retreats/workshops/etc. I don’t think I participated in much of the normal/official CFAR process, though I did attend the “train the trainers workshop”, and in this range of contexts saw some of how decisions were made, how workshops were run, how people related to each other at parties.
As I recall it, what I observed first-hand and was told second-hand at the time confirms bullets 2, 4, and 6 of the top-level comment. Many of the others are about how people felt, and are consistent with what people I knew reported at the time. Nothing in the top-level comment seems dissonant with what I observed.
It seems like there was a lot of fragmentation (which is why we mostly didn’t interact). I felt bad about exercising (a small amount of) unaccountable influence at the time through these mechanisms, but I was confused about so much relative to the rate at which I was willing to ask questions that I didn’t end up asking about the info-siloing. In hindsight it seems intended to keep the true nature of governance obscure and therefore unaccountable. I did see or at least hear reports of Anna pretending to give different people authority over things and then intervening if they weren’t doing the thing she expected, which is consistent with that hypothesis.
I’m afraid I don’t remember a lot of details beyond this; I had a lot going on that year aside from CFAR.
My comment initially said 2014-2016 but IIRC my involvement was much less after 2015 so I edited it.
I would like a lot more elaboration about this, if you can give it.
Can you say more specifically what you observed?
Unfortunately, I think the working relationship between Anna and Kenzi was exceptionally bad in some ways, and I would definitely believe that someone who mostly observed that would assume the organization had some of these problems; however, I think this was a fairly unusual situation within the organization.
(I suspect though am not certain that both Anna and Kenzi would affirm that indeed this was an especially bad dynamic.)
With respect to point 2, I do not believe there was major peer pressure at CFAR to use psychedelics, and I have never used psychedelics myself. It’s possible that there was major peer pressure on other people, or that it applied to me and I was oblivious to it, but I’d be surprised.
Psychedelic use was also one of a few things that were heavily discouraged (or maybe banned?) as conversation topics for staff at workshops. Like polyphasic sleep (another heavily discouraged topic), psychedelics were, I believe, viewed as potentially destabilizing and inappropriate to recommend to participants, plus there are legal issues involved. I personally consider recreational use of psychedelics to be immoral as well.
Thanks for the clarification, I’ve edited mine too.
What do you see as the main sorts of interventions CFAR was organized around? I feel like this is a “different worlds” thing where I ought to be pretty curious what the whole scene looked like to you, what it seemed like people were up to, what the important activities were, & where progress was being made (or attempted).
I think that CFAR, at least while I was there full-time from 2014 to sometime in 2016, was heavily focused on running workshops or other programs (like the alumni reunions or the MIRI Summer Fellows program). See for instance my comment here.
Most of what the organization was doing seemed to involve planning and executing workshops or other programs and teaching the existing curriculum. There were some developments and advancements to the curriculum, but they often came from the workshops or something around them (like followups) rather than a systematic development project. For example, Kenzi once took on the lion’s share of workshop followups for a time, which led to her coming up with new curriculum based on her sense of what the followup participants were missing even after having attended the workshop.
(In the time before I joined there had been significantly more testing of curriculum etc. outside of workshops, but this seemed to have become less the thing by the time I was there.)
A lot of CFAR’s internal focus was on improving operations capacity. There was at one time a narrative that the staff was currently unable to do some of the longer-term development because too much time was spent on last minute scrambles to execute programs, but once operations sufficiently improved, we’d have much more open time to allocate to longer-term development.
I was skeptical of this and I think I was ultimately vindicated: CFAR made major improvements to its operations, but this did not lead to systematic research and development emerging, though it did allow for running more programs and doing so more smoothly.