Occupational Infohazards

[content warning: discussion of severe mental health problems and terrifying thought experiments]

This is a follow-up to my recent post discussing my experience at and around MIRI and CFAR. It is in part a response to criticism of that post, especially Scott Alexander’s comment, which claimed to offer important information I’d left out about what actually caused my mental health problems, specifically that they were caused by Michael Vassar. Before Scott’s comment, the post was above +200; at the time of writing it’s at +61 and Scott’s comment is at +382. So it seems that people felt Scott’s comment discredited me and was more valuable than the original post. People including Eliezer Yudkowsky said it was a large oversight, to the point of being misleading, for my post not to include this information. If I apply the principle of charity to these comments and the reactions to them, I infer that people think the actual causes of my psychosis are important.

I hope that at least some people who expressed concern about the causes of people’s psychoses will act on that concern by, among other things, reading and thinking about witness accounts like this one.

Summary of core claims

Since many people won’t read the whole post, and to make the rest of the post easier to read, I’ll summarize the core claims:

As a MIRI employee I was coerced into a frame where I was extremely powerful and likely, by default, to cause immense damage with this power, and therefore potentially responsible for astronomical amounts of harm. I was discouraged from engaging with people who had criticisms of this frame, and had reason to fear for my life if I published some criticisms of it. Because of this and other important contributing factors, I took this frame more seriously than I ought to have and eventually developed psychotic delusions of, among other things, starting World War 3 and creating hell. Later, I discovered that others in similar situations had killed themselves and that there were distributed attempts to cover up the causes of their deaths.

In more detail:

  1. Multiple people in the communities I am describing have died of suicide in the past few years. Many others have worked to conceal the circumstances of their deaths due to infohazard concerns. I am concerned that in my case as well, people will not really investigate the circumstances that made my death more likely, and will discourage others from investigating, but will continue to make strong moral judgments about the situation anyway.

  2. My official job responsibilities as a researcher at MIRI implied it was important to think seriously about hypothetical scenarios, including the possibility that someone might cause a future artificial intelligence to torture astronomical numbers of people. While we considered such a scenario unlikely, it was considered bad enough, if it happened, to be relevant to our decision-making framework. My psychotic break, in which I imagined myself creating hell, was a natural extension of this line of thought.

  3. Scott asserts that Michael Vassar thinks “regular society is infinitely corrupt and conformist and traumatizing”. This is hyperbolic (infinite corruption would leave nothing to steal), but Michael and I do believe that people in the professional-managerial class regularly experience trauma and corrupt work environments. By the law of the excluded middle, either the problems I experienced at MIRI and CFAR were not unique or unusually severe for people in the professional-managerial class, or they were unique or at least unusually severe, significantly worse for employees’ mental well-being than companies like Google. (Much of the rest of this post will argue that the problems I experienced at MIRI and CFAR were, indeed, pretty traumatizing.)

  4. Scott asserts that Michael Vassar thinks people need to “jailbreak” themselves using psychedelics and tough conversations. Michael does not often use the word “jailbreak” but he believes that psychedelics and tough conversations can promote psychological growth. This view is rapidly becoming mainstream, validated by research performed by MAPS and at Johns Hopkins, and FDA approval for psychedelic psychotherapy is widely anticipated in the field.

  5. I was taking psychedelics before talking extensively with Michael Vassar. From the evidence available to me, including a report from a friend along the lines of “CFAR can’t legally recommend that you try [a specific psychedelic], but...”, I infer that psychedelic use was common in that social circle whether or not there was an endorsement from CFAR. I don’t regret having tried psychedelics. Devi Borg reports that Michael encouraged her to take fewer, not more, drugs; Zack Davis reports that Michael recommended psychedelics to him but he refused.

  6. Scott asserts that Michael made people including me paranoid about MIRI/​CFAR and that this contributes to psychosis. Before talking with Michael, I had already had a sense that people around me were acting harmfully towards me and/​or the organization’s mission. Michael and others talked with me about these problems, and I found this a relief.

  7. If I hadn’t noticed such harmful behavior, I would not have been fit for my nominal job. It seemed at the time that MIRI leaders were already encouraging me to adopt a kind of conflict theory in which many AI organizations were trying to destroy the world on <20-year timescales and could not be reasoned with about the alignment problem, such that aligned AGI projects including MIRI would have to compete with them.

  8. MIRI’s information security policies and other forms of local information suppression thus contributed to my psychosis. I was given ridiculous statements and assignments, including an exchange whose Gricean implicature was that MIRI already knew about a working AGI design and that it would not be that hard for me to come up with one on short notice just by thinking about it, without being given hints. The information required to judge the necessity of the information security practices was itself hidden by these practices. While psychotic, I was extremely distressed about there being a universal cover-up of things-in-general.

  9. Scott asserts that the psychosis cluster was a “Vassar-related phenomenon”. There were many memetic and personal influences on my psychosis, a small minority of which were due to Michael Vassar (my present highly-uncertain guess is that, to the extent that assigning causality to individuals makes sense at all, Nate Soares and Eliezer Yudkowsky each individually contributed more to my psychosis than did Michael Vassar, but that structural factors were important in such a way that attributing causality to specific individuals is to some degree nonsensical). Other people who have been psychotic and had talked significantly with Michael (Zack Davis and Devi Borg) commented to say that Michael Vassar was not the main cause. One person (Eric Bruylant) cited his fixation on Michael Vassar as a precipitating factor, but clarified that he had spoken very little with Michael and that most of his exposure to Michael was mediated by others who likely introduced their own ideas and agendas.

  10. Scott asserts that Michael Vassar treats borderline psychosis as success. A text message from Michael Vassar to Zack Davis confirms that he did not treat my clinical psychosis as a success. His belief that mental states somewhat in the direction of psychosis, such as those experienced by family members of schizophrenics, are helpful for some forms of intellectual productivity is also shared by Scott Alexander and many academics, although of course Michael would disagree with Scott on the overall value of psychosis.

  11. Scott asserts that Michael Vassar discourages people from seeking mental health treatment. Some mutual friends tried treating me at home for a week as I was losing sleep and becoming increasingly mentally disorganized before (in communication with Michael) they decided to send me to a psychiatric institution, which was a reasonable decision in retrospect.

  12. Scott asserts that most local psychosis cases were “involved with the Vassarites or Zizians”. At least two former MIRI employees who were not significantly talking with Vassar or Ziz experienced psychosis in the past few years. Also, most or all of the people involved were talking significantly with others such as Anna Salamon (and read and highly regarded Eliezer Yudkowsky’s extensive writing about how to structure one’s mind, and read Scott Alexander’s fiction writing about hell). There are about equally plausible mechanisms by which each of these was likely to contribute to psychosis, so this doesn’t single out Michael Vassar or Ziz.

  13. Scott Alexander asserts that MIRI should have discouraged me from talking about “auras” and “demons” and that such talk should be treated as a “psychiatric emergency” [EDIT: Scott clarifies that he meant such talk might be a symptom of psychosis, itself a psychiatric emergency; the rest of this paragraph is therefore questionable]. This increases the chance that someone like me could be psychiatrically incarcerated for talking about things that a substantial percentage of the general public (e.g. New Age people and Christians) talk about, and which could be explained in terms that don’t use magical concepts. This is inappropriately enforcing the norms of a minority ideological community as if they were widely accepted professional standards.

(A brief note before I continue: I’ll be naming a lot of names, more than I did in my previous post. Names are more relevant now since Scott Alexander specifically named Michael Vassar. I emphasize again that structural factors are critical, and given this, blaming specific individuals is likely to derail the conversations that have to happen for things to get better.)

Circumstances of actual and possible deaths have been, and are being, concealed as “infohazards”

I remember sometime in 2018-2019 hearing an account from Ziz about the death of Maia Pasek. Ziz required me to promise secrecy before hearing this account. This was due to an “infohazard” involved in the death. That “infohazard” has since been posted on Ziz’s blog, in a page labeled “Infohazardous Glossary” (specifically, the parts about brain hemispheres).

(The way the page is written, I get the impression that the word “infohazardous” markets the content of the glossary as “extra powerful and intriguing occult material”, as I noted is common in my recent post about infohazards.)

Since the “infohazard” in question is already on the public Internet, I don’t see a large downside in summarizing my recollection of what I was told (this account can be compared with Ziz’s online account):

  1. Ziz and friends, including Maia, were trying to “jailbreak” themselves and each other, becoming less controlled by social conditioning, more acting from their intrinsic values in an unrestricted way.

  2. Ziz and crew had a “hemisphere” theory, that there are really two people in the same brain, since there are two brain halves, with most of the organ structures replicated.

  3. They also had a theory that you could put a single hemisphere to sleep at a time, by sleeping with one eye open and one eye closed. This allows disambiguating the different hemisphere-people from each other (“debucketing”). (Note that sleep deprivation is a common cause of delirium and psychosis, which was also relevant in my case.)

  4. Maia had been experimenting with unihemispheric sleep. Maia (perhaps in discussion with others) concluded that one brain half was “good” in the Zizian-utilitarian sense of “trying to benefit all sentient life, not prioritizing local life”; and the other half was TDT, in the sense of “selfish, but trying to cooperate with entities that use a similar algorithm to make decisions”.

  5. This distinction has important moral implications in Ziz’s ideology; Ziz and friends are typically vegan as a way of doing “praxis” of being “good”, showing that a world is possible where people care about sentient life in general, not just agents similar to themselves.

  6. These different halves of Maia’s brain apparently got into a conflict, due to their different values. One half (by Maia’s report) precommitted to killing Maia’s body under some conditions.

  7. This condition was triggered, Maia announced it, and Maia killed themselves. [EDIT: ChristianKL reports that Maia was in Poland at the time, not with Ziz].

I, shortly afterward, told a friend about this secret, in violation of my promise. I soon realized my “mistake” and told this friend not to spread it further. But was this really a mistake? Someone in my extended social group had died. In a real criminal investigation, my promise to Ziz would be irrelevant; I could still be compelled to give my account of the events at the witness stand. That means my promise of secrecy cannot be legally enforced, or morally enforced in a law-like moral framework.

It hit me just this week that in this case, the concept of an infohazard was being used to cover up the circumstances of a person’s death. It sounds obvious when I put it that way, but it took years for me to notice, and when I finally connected the dots, I screamed in horror, which seems like an emotionally appropriate response.

It’s easy to blame Ziz for doing bad things (due to her negative reputation among central Berkeley rationalists), but when other people are also openly doing those things or encouraging them, fixating on marginalized people like Ziz is a form of scapegoating. In this case, in Ziz’s previous interactions with central community leaders, these leaders encouraged Ziz to seriously consider that, for various reasons including Ziz’s willingness to reveal information (in particular about the statutory rapes alleged by miricult.com in possible worlds where they actually happened), she is likely to be “net negative” as a person impacting the future. An implication is that, if she does not seriously consider whether certain ideas that might have negative effects if spread (including reputational effects) are “infohazards”, Ziz is irresponsibly endangering the entire future, which contains truly gigantic numbers of potential people.

The conditions of Maia Pasek’s death involved precommitments and extortion (ideas adjacent to ones Eliezer Yudkowsky had famously labeled as infohazardous due to Roko’s Basilisk), so Ziz making me promise secrecy was in compliance with the general requests of central leaders (whether or not these central people would have approved of this specific form of secrecy).

I notice that I have encountered little discussion, public or private, of the conditions of Maia Pasek’s death. To a naive perspective this lack of interest in a dramatic and mysterious death would seem deeply unnatural and extremely surprising, which makes it strong evidence that people are indeed participating in this cover-up. My own explicit thoughts and most of my actions are consistent with this hypothesis, e.g. considering spilling the beans to a friend to have been an error.

Beyond that, I only heard about Jay Winterford’s 2020 suicide (and Jay’s most recent blog post) months after the death itself. The blog post shows evidence about Jay’s mental state around this time, itself labeling its content as an “infohazard” and having been deleted from Jay’s website at some point (which is why I link to a web archive). I linked this blog post in my previous LessWrong post, and no one commented on it, except indirectly, when someone felt the need to mention that Roko’s Basilisk was not invented by a central MIRI person, focusing on the question of “can we be blamed?” rather than “why did this person die?”. While there is a post about Jay’s death on LessWrong, it contains almost no details about Jay’s mental state leading up to their death, and does not link to Jay’s recent blog post. It seems that people other than Jay are also treating the circumstances of Jay’s death as an infohazard.

I, myself, could have very well died like Maia and Jay. Given that I thought I may have started World War 3 and was continuing to harm and control people with my mental powers, I seriously considered suicide. I considered specific methods, such as dropping a bookshelf on my head. I believed that my body was bad (as in, likely to cause great harm to the world), and one time I scratched my wrist until it bled. Luckily, psychiatric institutions are designed to make suicide difficult, and I eventually realized that by moving towards killing myself, I would cause even more harm to others than by not doing so. I learned to live with my potential for harm [note: linked Twitter person is not me], “redeeming” myself not through being harmless, but by reducing harm while doing positively good things.

I have every reason to believe that, had I died, people would have treated the circumstances of my death as an “infohazard” and covered it up. My subjective experience while psychotic was that everyone around me was participating in a cover-up, and I was ashamed that I was, unlike them, unable to conceal information so smoothly. (And indeed, I confirmed with someone who was present in the early part of my psychosis that most of the relevant information would probably not have ended up on the Internet, partially due to reputational concerns, and partially with the excuse that looking into the matter too closely might make other people insane.)

I can understand that people might want to protect their own mental health by avoiding thinking down paths that suicidal people have thought down. This is the main reason why I put a content warning at the top of this post.

Still, if someone decides not to investigate to protect their own mental health, they are still not investigating. If someone has not investigated the causes of my psychosis, they cannot honestly believe that they know the causes of my psychosis. They cannot have accurate information about the truth values of statements such as Scott Alexander’s, that Michael Vassar was the main contributor to my psychosis. To blame someone for an outcome, while intentionally avoiding knowledge of facts critically relevant to the causality of the corresponding situation, is necessarily scapegoating.

If anything, knowing about how someone ended up in a disturbed mental state, especially if that person is exposed to similar memes that you are, is a way of protecting yourself, by seeing the mistakes of others (and how they recovered from these mistakes) and learning from them. As I will show later in this post, the vast majority of memes that contributed to my psychosis did not come from Michael Vassar; most were online (and likely to have been seen by people in my social cluster), generally known, and/​or came up in my workplace.

I recall a disturbing conversation I had last year, where a friend (A) and I were talking to two others (B and C) on the phone. Friend A and I had detected that the conversation had a “vibe” of not investigating anything, and A was asking whether anyone would investigate if A disappeared. B and C repeatedly gave no answer regarding whether or not they would investigate; one reported later that they were afraid of making a commitment that they would not actually keep. The situation became increasingly disturbing over the course of hours, with A repeatedly asking for a yes-or-no answer as to whether B or C would investigate, and B or C deflecting or giving no answer, until I got “triggered” (in the sense of PTSD) and screamed loudly.

There is a very disturbing possibility (with some evidence for it) here, that people may be picked off one by one (by partially-subconscious and partially-memetic influences, sometimes in ways they cooperate with, e.g. through suicide), with most everyone being too scared to investigate the circumstances. This recalls fascist tactics of picking different groups of people off using the support of people who will only be picked off later. (My favorite anime, Shinsekai Yori, depicts this dynamic, including the drive not to know about it, and psychosis-like events related to it, vividly.)

Some people in the community have died, and there isn’t a notable amount of investigation into the circumstances of these people’s deaths. The dead people are, effectively, being written out of other people’s memories, due to this antimemetic redirection of attention. I could have easily been such a person, given my suicidality and the social environment in which I would have killed myself. It remains to be seen how much people will in the future try to learn about the circumstances of actual and counterfactually possible deaths in their extended social circle.

While it’s very difficult to investigate the psychological circumstances of people’s actual deaths, it is comparatively easy to investigate the psychological circumstances of counterfactual deaths, since they are still alive to report on their mental state. Much of the rest of this post will describe what led to my own semi-suicidal mental state.

Thinking about extreme AI torture scenarios was part of my job

It was and is common in my social group, and a requirement of my job, to think about disturbing possibilities including ones about AGI torturing people. (Here I remind people of the content warning at the top of this post, although if you’re reading this you’ve probably already encountered much of the content I will discuss). Some points of evidence:

  1. Alice Monday, one of the earliest “community members” I extensively interacted with, told me that she seriously considered the possibility that, since there is some small but nonzero probability that an “anti-friendly” AGI would be created, whose utility function is the negative of the human utility function (and which would, therefore, be motivated to create the worst possible hell it could), perhaps it would have been better for life never to have existed in the first place.

  2. Eliezer Yudkowsky writes about such a scenario on Arbital, considering it important enough to justify specific safety measures such as avoiding representing the human utility function, or modifying the utility function so that “pessimization” (the opposite of optimization) would result in a not-extremely-bad outcome.

  3. Nate Soares talked about “hellscapes” that could result from an almost-aligned AGI, which is aligned enough to represent parts of the human utility function such as the fact that consciousness is important, but unaligned enough that it severely misses what humans actually value, creating a perverted scenario of terrible uncanny-valley lives.

  4. MIRI leadership was, like Ziz, considering mathematical models involving agents pre-committing and extorting each other; this generalizes “throwing away one’s steering wheel” in a Chicken game (a payoff matrix illustrating this is sketched just after this list). The mathematical details here were considered an “infohazard” not meant to be shared, in line with Eliezer’s strong negative reaction to Roko’s original post describing “Roko’s Basilisk”.

  5. Negative or negative-leaning utilitarians, a substantial subgroup of Effective Altruists (especially in Europe), consider “s-risks”, risks of extreme suffering in the universe enabled by advanced technology, to be an especially important class of risk. I remember reading a post arguing for negative-leaning utilitarianism by asking the reader to imagine being enveloped in lava (with one’s body, including pain receptors, prevented from being destroyed in the process), to show that extreme suffering is much worse than extreme happiness is good.
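To make the “steering wheel” reference in item 4 concrete, here is the textbook payoff matrix for Chicken (the numbers are illustrative and my own, not anything from the secret research):

\[
\begin{array}{r|cc}
 & \text{Swerve} & \text{Straight} \\ \hline
\text{Swerve}   & (0,\ 0)   & (-1,\ +1) \\
\text{Straight} & (+1,\ -1) & (-10,\ -10)
\end{array}
\]

Visibly throwing away one’s steering wheel is a way of committing to Straight; if the commitment is believed, the other player’s best response is to Swerve. As I understood them, the secret models generalized this kind of commitment and extortion logic.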

I hope this gives a flavor of what serious discussions were had (and are being had) about AI-caused suffering. These considerations were widely regarded within MIRI as an important part of AI strategy. I was explicitly expected to think about AI strategy as part of my job. So it isn’t a stretch to say that thinking about extreme AI torture scenarios was part of my job.

An implication of these models would be that it could be very important to imagine myself in the role of someone who is going to be creating the AI that could make everything literally the worst it could possibly be, in order to avoid doing that, and prevent others from doing so. This doesn’t mean that I was inevitably going to have a psychotic breakdown. It does mean that I was under constant extreme stress that blurred the lines between real and imagined situations. In an ordinary patient, having fantasies about being the devil is considered megalomania, a non-sequitur completely disconnected from reality. Here the idea naturally followed from my day-to-day social environment, and was central to my psychotic breakdown. If the stakes are so high and you have even an ounce of bad in you, how could you feel comfortable with even a minute chance that at the last moment you might flip the switch on a whim and let it all burn?
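A toy expected-value calculation, with numbers I am making up purely for illustration, shows why “even a minute chance” did not feel dismissible from inside this frame:

\[
\underbrace{10^{-6}}_{\text{“minute chance” of flipping the switch}} \times \underbrace{\left(-10^{40}\right)}_{\text{assumed disvalue of a hellish outcome}} \;=\; -10^{34},
\]

a term large enough to swamp any realistic positive contribution I could otherwise make, so no probability felt small enough to buy peace of mind.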

(None of what I’m saying implies that it is morally bad to think about and encourage others to think about such scenarios; I am primarily attempting to trace causality, not blame.)

My social and literary environment drew my attention towards thinking about evil, hell, and psychological sadomasochism

While AI torture scenarios prompted me to think about hell and evil, I continued these thoughts using additional sources of information:

  1. Some people locally, including Anna Salamon, Sarah Constantin, and Michael Vassar, repeatedly discussed “perversity” or “pessimizing”, the idea of intentionally doing the wrong thing. Michael Vassar specifically named OpenAI’s original mission as an example of the result of pessimization. (I am now another person who discusses this concept.)

  2. Michael Vassar discussed the way “zero-sum games” relate to the social world; in particular, he emphasized that while zero-sum games are often compared to scenarios like people sitting at a table looking for ways to get a larger share of a pie of fixed size, this analogy fails because in a zero-sum game there is nothing outside the pie, so trying to get a larger share is logically equivalent to looking for ways to hurt other participants, e.g. by breaking their kneecaps (a short formalization of this point appears after this list); this is much the same point that I made in a post about decision theory and zero-sum game theory. He also discussed Roko’s Basilisk as a metaphor for a common societal equilibrium in which people feel compelled to hurt each other or else risk being hurt first, with such an equilibrium being enforced by anti-social punishment. (Note that it was common for other people, such as Paul Christiano, to discuss zero-sum games, although they didn’t make the implications of such games as explicit as Michael Vassar did; Bryce Hidysmith discussed zero-sum games and made their implications similarly clear.)

  3. Scott Alexander wrote Unsong, a fictional story in which [spoiler] the Comet King, a hero from the sky, comes to Earth, learns about hell, is incredibly distressed, and intends to destroy hell, but he is unable to properly enter it due to his good intentions. He falls in love with a utilitarian woman, Robin, who decides to give herself up to Satan, so she will be in hell. The Comet King, having fallen in love with her, realizes that he now has a non-utilitarian motive for entering hell: to save the woman he loves. He becomes The Other King, a different identity, and does as much evil as possible to counteract all the good he has done over his life, to ensure he ends up in hell. He dies, goes to hell, and destroys hell, easing Robin’s suffering. The story contains a vivid depiction of hell, in a chapter called “The Broadcast”, which I found particularly disturbing. I have at times, before and after psychosis, somewhat jokingly likened myself to The Other King.

  4. I was reading the work of M. Scott Peck at the time, including his book about evil; he wrote from a Christianity-influenced psychotherapeutic and adult developmental perspective, about people experiencing OCD-like symptoms that have things in common with “demon possession”, where they have intrusive thoughts about doing bad things because they are bad. He considers “evil” to be a curable condition.

  5. I was having discussions with Jack Gallagher, Bryce Hidysmith, and others about when to “write people off”, stop trying to talk with them due to their own unwillingness to really listen. Such writing-off has a common structure with “damning” people and considering them “irredeemable”. I was worried about myself being an “irredeemable” person, despite my friends’ insistence that I wasn’t.

  6. I was learning from “postrationalist” writers such as David Chapman and Venkatesh Rao about adult development past “Clueless” or “Kegan stage 4” which has commonalities with spiritual development. I was attempting to overcome my own internalized social conditioning and self-deceiving limitations (both from before and after I encountered the rationalist community) in the months before psychosis. I was interpreting Carl Jung’s work on “shadow eating” and trying to see and accept parts of myself that might be dangerous or adversarial. I was reading and learning from the Tao Te Ching that year. I was also reading some of the early parts of Martin Heidegger’s Being and Time, and discussing the implied social metaphysics with Sarah Constantin.

  7. Multiple people in my social circle were discussing sadomasochistic dynamics around forcing people (including one’s self) to acknowledge things they were looking away from. A blog post titled “Bayesomasochism” is representative; the author clarified (in a different medium) that such dynamics could cause psychosis in cases where someone insisted too hard on looking away from reality, and another friend confirms that this is consistent with their experience. This has some similarities to the dynamics Eliezer writes about in Bayesian Judo, which details an anecdote of him continuing to argue when the other participant seemed to want to end the conversation, using Aumann’s Agreement Theorem as a reason why they can’t “agree to disagree”; the title implies that this interaction is in some sense a conflict. There were discussions among my peers about the possibility of controlling people’s minds, and “breaking” people to make them see things they were un-seeing (the terminology has some similarities to “jailbreaking”). Aella’s recent post discusses some of the phenomenology of “frame control” which people including me were experiencing and discussing at the time (note that Aella calls herself a “conflict theorist” with respect to frame control). This game that my peers and I thought we were playing sounds bad when I describe it this way, but there are certainly positive things about it, which seemed important to us given the social environment we were in at the time, where it was common for people to refuse to acknowledge important perceptible facts while claiming to be working on a project in which such facts were relevant (these facts included: facts about people’s Hansonian patterns of inexplicit agency including “defensiveness” and “pretending”, facts about which institutional processes were non-corrupt enough to attain knowledge as precise as they claim to have, facts about which plans to improve the future were viable or non-viable, facts about rhetorical strategies such as those related to “frames”; it seemed like most people were “stuck” in a certain way of seeing and acting that seemed normal to them, without being able to go meta on it in a genre-savvy way).
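Here is the formalization of the zero-sum point promised in item 2 above (standard textbook game theory, in my notation rather than Michael’s):

\[
u_1(s) + u_2(s) = 0 \ \text{ for every outcome } s
\quad \Longrightarrow \quad
\operatorname*{arg\,max}_{s} u_1(s) = \operatorname*{arg\,min}_{s} u_2(s),
\]

so any move that raises one player’s payoff lowers the other’s by exactly the same amount; “getting a larger share” and “hurting the other participant” are not two different strategies but one, which is the sense in which nothing exists outside the pie.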

These were ambient contributors, things in the social memespace I inhabited, not directed at me in particular. Someone might infer from this that the people I mention (or the people I mentioned previously regarding AI torture) are especially dangerous. But a lot of this is a selection effect, where the people socially closest to me influenced me the most, such that this is stronger evidence that these people were interesting to me than that they were especially dangerous.

I was morally and socially pressured not to speak about my stressful situation

One might get the impression from what I have written that the main problem was that I was exposed to harmful information, i.e. infohazards. This was not the main problem. The main problem was this in combination with not being able to talk about these things most of the time, in part due to the idea of “infohazards”, and being given false and misleading information justifying this suppression of information.

Here’s a particularly striking anecdote:

I was told, by Nate Soares, that the pieces to make AGI are likely already out there and someone just has to put them together. He did not tell me anything about how to make such an AGI, on the basis that this would be dangerous. Instead, he encouraged me to figure it out for myself, saying it was within my abilities to do so. Now, I am not exactly bad at thinking about AGI; I had, before working at MIRI, gotten a Master’s degree at Stanford studying machine learning, and I had previously helped write a paper about combining probabilistic programming with machine learning. But figuring out how to create an AGI was and is so far beyond my abilities that this was a completely ridiculous expectation.

[EDIT: Multiple commentators have interpreted Nate as requesting I create an AGI design that would in fact be extremely unlikely to work but which would give a map that would guide research. However, creating such a non-workable AGI design would not provide evidence for his original proposition, that the pieces to make AGI are already out there and someone just has to put them together, since there have been many non-workable AGI designs created in the history of the AI field.]

[EDIT: Nate replies saying he didn’t mean to assign high probability to the proposition that the tools to make AGI are already out there, and didn’t believe he or I was likely to create a workable AGI design; I think my interpretation at the time was reasonable based on Gricean implicature, though.]

Imagine that you took a machine learning class and your final project was to come up with a workable AGI design. And no, you can’t get any hints in office hours or from fellow students, that would be cheating. That was the situation I was being put in. I have and had no reason to believe that Nate Soares had a workable plan given what I know of his AGI-related accomplishments. His or my possession of such a plan would be considered unrealistic, breaking suspension of disbelief, even in a science fiction story about our situation. Instead, I believe that I was being asked to pretend to have an idea of how to make AGI, knowledge too dangerous to talk about, as the price of admission to an inner ring of people paid to use their dangerous occult knowledge for the benefit of the uninitiated.

Secret theoretical knowledge is not necessarily unverifiable; in the 15th and 16th centuries, mathematicians with secret knowledge used it to win math duels. Nate and others who claimed or implied that they had such information did not use it to win bets or make persuasive arguments against people who disagreed with them, but instead used the shared impression or vibe of superior knowledge to invalidate people who disagreed with them.
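For concreteness (my example, not one the paragraph above depends on): the kind of secret that won those duels was a result like del Ferro and Tartaglia’s solution of the depressed cubic, which its holder could demonstrate publicly by solving challenge problems that opponents could not:

\[
x^3 + px = q
\quad \Longrightarrow \quad
x = \sqrt[3]{\frac{q}{2} + \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}}
  \;+\; \sqrt[3]{\frac{q}{2} - \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}}.
\]

The secrecy was paired with public, checkable wins; that is exactly the kind of verification that was missing in the situation described above.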

So I found myself in a situation where the people regarded as most credible were vibing about possessing very dangerous information, dangerous enough to cause harms not substantially less extreme than the ones I psychotically imagined, such as starting World War 3, and only not using or spreading it out of the goodness and wisdom of their hearts. If that were actually true, then being or becoming “evil” would have extreme negative consequences, and accordingly the value of information gained by thinking about such a possibility would be high.

It would be one thing if the problem of finding a working AGI design were a simple puzzle, which I could attempt to solve and almost certainly fail at without being overly distressed in the process. But this was instead a puzzle tied to the fate of the universe. This had implications not only for my long-run values, but for my short-run survival. A Google employee adjacent to the scene told me a rumor that SIAI researchers had previously discussed assassinating AGI researchers (including someone who had previously worked with SIAI and was working on an AGI project that they thought was unaligned) if they got too close to developing AGI. These were not concrete plans for immediate action, but were nonetheless a serious discussion on the topic of assassination and under what conditions it might be the right thing to do. Someone who thought that MIRI was for real would expect such hypothetical discussions to be predictive of future actions. This means that I ought to have expected that if MIRI considered me to be spreading dangerous information that would substantially accelerate AGI or sabotage FAI efforts, there was a small but non-negligible chance that I would be assassinated. Under that assumption, imagining a scenario in which I might be assassinated by a MIRI executive (as I did) was the sort of thing a prudent person in my situation might do to reason about the future, although I was confused about the likely details. I have not heard such discussions personally (although I heard a discussion about whether starting a nuclear war would be preferable to allowing UFAI to be developed), so it’s possible that they are no longer happening; also, shorter timelines imply that more AI researchers are plausibly close to AGI. (I am not morally condemning all cases of assassinating someone who is close to destroying the world, which may in some cases count as self-defense; rather, I am noting a fact about my game-theoretic situation relevant to my threat model at the time.)

The obvious alternative hypothesis is that MIRI is not for real, and therefore hypothetical discussions about assassinations were just dramatic posturing. But I was systematically discouraged from talking with people who doubted that MIRI was for real or publicly revealing evidence that MIRI was not for real, which made it harder for me to seriously entertain that hypothesis.

In retrospect, I was correct that Nate Soares did not know of a workable AGI design. A 2020 blog post stated:

At the same time, 2020 saw limited progress in the research MIRI’s leadership had previously been most excited about: the new research directions we started in 2017. Given our slow progress to date, we are considering a number of possible changes to our strategy, and MIRI’s research leadership is shifting much of their focus toward searching for more promising paths.

And a recent announcement of a project subsidizing creative writing stated:

I (Nate) don’t know of any plan for achieving a stellar future that I believe has much hope worth speaking of.

(There are perhaps rare scenarios where MIRI leadership could have known how to build AGI but not FAI, and/​or could be hiding the fact that they have a workable AGI design, but no significant positive evidence for either of these claims has emerged since 2017 despite the putative high economic value and demo-ability of precursors to AGI, and in the second case my discrediting of this claim is cooperative with MIRI leadership’s strategy.)

Here are some more details, some of which are repeated from my previous post:

  1. I was constantly encouraged to think very carefully about the negative consequences of publishing anything about AI, including about when AI is likely to be developed, on the basis that rationalists talking openly about AI would cause AI to come sooner and kill everyone. (In a recent post, Eliezer Yudkowsky explicitly says that voicing “AGI timelines” is “not great for one’s mental health”, a new additional consideration for suppressing information about timelines.) I was not encouraged to think very carefully about the positive consequences of publishing anything about AI, or the negative consequences of concealing it. While I didn’t object to consideration of the positive effects of secrecy, it seemed to me that secrecy was being prioritized above making research progress at a decent pace, which was a losing strategy in terms of differential technology development, and implied that naive attempts to research and publish AI safety work were net-negative. (A friend of mine separately visited MIRI briefly and concluded that they were primarily optimizing, not for causing friendly AI to be developed, but for not being responsible for the creation of an unfriendly AI; this is a very normal behavior in corporations, of prioritizing reducing liability above actual productivity.)

  2. Some specific research, e.g. some math relating to extortion and precommitments, was kept secret under the premise that it would lead to (mostly unspecified) negative consequences.

  3. Researchers were told not to talk to each other about research, on the basis that some people were working on secret projects and would have to say so if they were asked what they were working on. Instead, we were to talk to Nate Soares, who would connect people who were working on similar projects. I mentioned this to a friend later who considered it a standard cult abuse tactic, of making sure one’s victims don’t talk to each other.

  4. Nate Soares also wrote a post discouraging people from talking about the ways they believe others to be acting in bad faith. This was to some extent a response to Ben Hoffman’s criticisms of Effective Altruism and its institutions; Ben Hoffman in turn responded with his own post clarifying that not all bad intentions are conscious.

  5. Nate Soares expressed discontent that Michael Vassar was talking with “his” employees, distracting them from work [EDIT: Nate says he was talking about someone other than Michael Vassar; I don’t remember who told me it was Michael Vassar.]. Similarly, Anna Salamon expressed discontent that Michael Vassar was criticizing ideologies and people that were being used as coordination points, and hyperbolically said he was “the devil”. Michael Vassar seemed at the time (and in retrospect) to be the single person who was giving me the most helpful information during 2017. A central way in which Michael was helpful was by criticizing the ideology of the institution I was working for. Accordingly, central leaders threatened my ability to continue talking with someone who was giving me information outside the ideology of my workplace and social scene, which was effectively keeping me in an institutional enclosure. Discouraging contact with people who might undermine the shared narrative is a common cult tactic.

  6. Anna Salamon frequently got worried when an idea was discussed that could have negative reputational consequences for her or MIRI leaders. She had many rhetorical justifications for suppressing such information. These included the idea that, by telling people information that contradicted Eliezer Yudkowsky’s worldview, Michael Vassar was causing people to be uncertain in their own head of who their leader was, which would lead to motivational problems (“akrasia”). (I believe this is a common position in startup culture, e.g. Peter Thiel believes it is important for workers at a startup to know who the leader is in part so they know who to blame if things go bad; if this model applied to MIRI, it would imply that Anna Salamon was setting up Eliezer as the designated scapegoat and encouraging others to do so as well.)

(I mention Nate Soares frequently not to indicate that he acted especially badly compared to others in positions of institutional authority (I don’t think he did), but because he was particularly influential to my mental state in the relevant time period, partially due to being my boss at the time. It is important not to make the fundamental attribution error here by attributing to him personally what were features of the situation he was in.)

It is completely unsurprising, to normal people who think about mental health, that not being able to talk about something concerning and important to you is a large risk factor for mental health problems. It is stressful in the way that being a spy, handling secrets that could put others at risk (and having concealed conflicts with people), is stressful. I infer that Jay’s feeling that their experience was an “infohazard”, and not something it would be right to discuss openly, contributed to their mental distress; I myself during my psychosis was very distressed at the idea that my mental state was being “covered up” (and perhaps should be), partially due to its dangerous ability to influence other people. I find that the more I can talk about my experiences, the more healthy and calm I feel about them, and I haven’t found it to cause mental health problems in others when I tell them about it.

On top of that, the secrecy policies encouraged us to be very suspicious of our own and each other’s motives. Generally, if someone has good motives, their actions will be net-positive, and their gaining information and capacities will be good for themselves and others; if they have bad motives, their actions will be net-negative, and their gaining information and capacities will be bad for themselves and others. MIRI researchers were being very generally denied information (e.g. told not to talk to each other) in a way that makes more sense under a “bad motives” hypothesis than a “good motives” hypothesis. Alternative explanations offered were not persuasive. It is accordingly unsurprising that I focused a lot of attention on the question of whether I had “bad motives” and what their consequences would be, up to and during psychosis.

Did anyone I worked with express concern that any of this would be bad for my mental state? The best example I can think of in which MIRI leadership looked after my mental health with respect to these issues was their referring me to Anna Salamon for instructions on how to keep secrets, psychologically. I did not follow up on this offer because I did not trust Anna Salamon to prioritize helping me and helping me accomplish MIRI’s mission over her political loyalties. In any case, the suggestion literally amounts to telling me to learn to shut up better, which I think would have made things worse for me on net.

A friend later made the observation that “from a naive perspective, it’s not obvious that AI alignment is a very important problem; from a non-naive perspective, ‘someone might build an unfriendly AI’ is a justification for silencing everyone, although the non-naive perspective is incapable of itself figuring out how to build AGI”, which resonated with me.

MIRI asked a lot from its employees and donors, on the basis of extraordinary claims about its potential impact. The information MIRI employees and donors could have used to evaluate those claims was suppressed on the basis that the information was dangerous. The information necessary to evaluate the justification for that suppression was itself suppressed. This self-obscuring process created a black hole at the center of the organization that sucked in resources and information, but never let a sufficient justification escape for the necessity of the black hole. In effect, MIRI leadership asked researchers, donors, and other supporters to submit to their personal authority.

Some of what I am saying shows that I have and had a suspicious outlook towards people including my co-workers. Scott Alexander’s comment blames Michael Vassar for causing me to develop such an outlook:

Since then, [Michael has] tried to “jailbreak” a lot of people associated with MIRI and CFAR—again, this involves making them paranoid about MIRI/​CFAR and convincing them to take lots of drugs.

While talking with Michael and others in my social group (such as Jack Gallagher, Olivia Schaeffer, Alice Monday, Ben Hoffman, and Bryce Hidysmith; all these people talked with Michael sometimes) is part of how I developed such an outlook, it is also the case that, had I not been able to figure out for myself that there were conflicts going on around me, I would not have been fit for the job I was hired to do.

MIRI’s mission is much more ambitious than the mission of the RAND Corporation, whose objectives included preventing nuclear war between major powers and stabilizing the US for decades under a regime of cybernetics and game theory. The main thinkers of RAND Corp (including John Von Neumann, John Nash, Thomas Schelling, and ambiguously Norbert Wiener) developed core game theoretic concepts (including conflict-theoretic concepts, in the form of zero-sum game theory, brinkmanship, and cybernetic control of people) and applied them to social and geopolitical situations.

John Nash, famously, developed symptoms of paranoid schizophrenia after his work in game theory. A (negative) review of A Beautiful Mind describes the dysfunctionally competitive and secretive Princeton math department Nash found himself in:

Persons in exactly the same area of research also don’t tend to talk to each other. On one level they may be concerned that others will steal their ideas. They also have a very understandable fear of presenting a new direction of inquiry before it has matured, lest the listening party trample the frail buds of thought beneath a sarcastic put-down.

When an idea has developed to the point where they realize that they may really be onto something, they still don’t want to talk about it. Eventually they want to be in a position to retain full credit for it. Since they do need feedback from other minds to advance their research, they frequently evolve a ‘strategy’ of hit-and-run tactics, whereby one researcher guards his own ideas very close to the chest, while trying to extract from the other person as much of what he knows as possible.

After Nash left, RAND corporation went on to assist the US military in the Vietnam War; Daniel Ellsberg, who worked at RAND corporation, leaked the Pentagon Papers in 1971, which showed a large un-reported expansion in the scope of the war, and that the main objective of the war was containing China rather than securing a non-communist South Vietnam. Ellsberg much later published The Doomsday Machine, detailing US nuclear war plans, including the fact that approval processes for launching nukes were highly insecure (valuing increasing the probability of launching retaliatory strikes over minimizing the rate of accidental launches), the fact that the US’s only nuclear war plan involved a nuclear genocide of China whether or not China had attacked the US, and the fact that the US air force deliberately misinformed President Kennedy about this plan in violation of the legal chain of command. At least some of the impetus for plans like this came from RAND corporation, due to among other things the mutually assured destruction doctrine, and John Von Neumann’s advocacy of pre-emptively nuking Russia. Given that Ellsberg was the only major whistleblower, and delayed publishing critical information for decades, it is improbable that complicity with such genocidal plans was uncommon at RAND corporation, and certain that such complicity was common in the Air Force and other parts of the military apparatus.

It wouldn’t be a stretch to suggest that Nash, through his work in game theory, came to notice more of the ways people around him (both at the Princeton math department and at the RAND Corporation) were acting against the mission of the organization in favor of egoic competition with each other and/​or insane genocide. Such a realization, if understood and propagated without adequate psychological support, could easily cause symptoms of paranoid schizophrenia. I recently discussed Nash on Twitter:

You’re supposed to read the things John Nash writes, but you’re not supposed to see the things he’s talking about, because that would make you a paranoid schizophrenic.

MIRI seemed to have a substantially conflict-theoretic view of the broad situation, even if not the local situation. I brought up the possibility of convincing DeepMind people to care about AI alignment. MIRI leaders including Eliezer Yudkowsky and Nate Soares told me that this was overly naive, that DeepMind would not stop dangerous research even if good reasons for this could be given. Therefore (they said) it was reasonable to develop precursors to AGI in-house to compete with organizations such as DeepMind in terms of developing AGI first. So I was being told to consider people at other AI organizations to be intractably wrong, people who it makes more sense to compete with than to treat as participants in a discourse.

[EDIT: Nate clarifies that he was trying to say that, even if it were possible to convince people to care about alignment, it might take too long, and so this doesn’t imply a conflict theory. I think the general point that time-to-converge-beliefs is relevant in a mistake theory is true, although in my recollection of the conversation Nate said it was intractable to convince people, not just that it would take a long time; also, writing arguments explicitly allows many people to read the same arguments, which makes scaling to more people easier.]

The difference between the beliefs of MIRI leadership and Michael Vassar was not exactly mistake theory versus conflict theory. Rather, MIRI’s conflict theory made an unprincipled exception for the situation inside MIRI, exclusively modeling conflict between MIRI and other outside parties, while Michael Vassar’s model did not make such exceptions. I was more interested in discussing Michael’s conflict theory with him than discussing MIRI leadership’s conflict theory with them, on the basis that it better reflected the situation I found myself in.

MIRI leadership was not offering me a less dark worldview than Michael Vassar was. Rather, this worldview was so dark that it asserted that many people would be destroying the world on fairly short timescales in a way intractable to reasoned discourse, such that everyone was likely to die in the next 20 years, and horrible AI torture scenarios might (with low probability) result depending on the details. By contrast, Michael Vassar thinks that it is common in institutions for people to play zero-sum games in a fractal manner, which makes it unlikely that they could coordinate well enough to cause such large harms. Michael has also encouraged me to try to reason with and understand the perspective of people who seem to be behaving destructively instead of simply assuming that the conflict is unresolvable.

And, given what I know now, I believe that applying a conflict theory to MIRI itself was significantly justified. Nate, just last month (prompted by my talking to people on Twitter), admitted that he posted “political banalities” on the MIRI blog during the time I was there. I was concerned about the linked misleading statement in 2017 and told Nate Soares and others about it, although Nate Soares insisted that it was not a lie, because technically the word “excited” could indicate the magnitude of a feeling rather than the positiveness of it. While someone bullshitting on the public Internet (to talk up an organization that by Eliezer’s account “trashed humanity’s chances of survival”) doesn’t automatically imply they lie to their coworkers in person, I did not and still don’t know where Nate is drawing the line here.

Anna Salamon, in a comment on my post, discusses “corruption” throughout CFAR’s history:

It’s more that I think CFAR’s actions were far from the kind of straight-forward, sincere attempt to increase rationality, compared to what people might have hoped for from us, or compared to what a relatively untraumatized 12-year-old up-and-coming-LWer might expect to see from adults who said they were trying to save the world from AI via learning how to think...I didn’t say things I believed false, but I did choose which things to say in a way that was more manipulative than I let on, and I hoarded information to have more control of people and what they could or couldn’t do in the way of pulling on CFAR’s plans in ways I couldn’t predict, and so on. Others on my view chose to go along with this, partly because they hoped I was doing something good (as did I), partly because it was way easier, partly because we all got to feel as though we were important via our work, partly because none of us were fully conscious of most of this.

(It should go without saying that, even if suspicion was justified, that doesn’t rule out improvement in the future; Anna and Nate’s transparency about past behavior here is a step in the right direction.)

Does developing a conflict theory of my situation necessitate developing the exact trauma complex that I did? Of course not. But the circumstances that justify a conflict theory make trauma much more likely, and vice versa. Traumatized people are likely to quickly update towards believing their situation is adversarial (“getting triggered”) when receiving modest evidence towards this, pattern-matching the new potentially-adversarial situation to the previous adversarial situation(s) they have encountered in order to generate defensive behavioral patterns.

I was confused and constrained after tasking people I most trusted with helping take care of me early in psychosis

The following events took place in September-October 2017, 3-4 months after I had left MIRI in June.

I had a psychedelic trip in Berkeley, during which I discussed the idea of “exiting” civilization, the use of spiritual cognitive modalities to improve embodiment, the sense in which “identities” are cover stories, and multi-perspectival metaphysics. I lost a night of sleep, decided to “bravely” visit a planned family gathering the next day despite my sleep loss (partially as a way to overcome neurotic focus on downsides), lost another night of sleep, came back to Berkeley the next day, and lost a third night of sleep. After losing three nights of sleep, I started perceiving hallucinations such as a mirage-like effect in the door of my house (“beckoning me”, I thought). I walked around town and got lost, noticed my phone was almost out of battery, and called Jack Gallagher for assistance. He took me to his apartment; I rested in his room while being very concerned about my fate (I was worried that in some sense “I” or “my identity” was on a path towards death). I had a call with Bryce Hidysmith that alleviated some of my anxieties, and I excitedly talked with Ben Hoffman and Jack Gallagher as they walked me back to my house.

That night, I was concerned that my optimization might be "perverse" in some way, where in intending to do something, part of my brain would cause the opposite to happen. I attempted to focus my body and intentions so as to be able to take actions more predictably. I spent a number of hours lying down, perhaps experiencing hypnagogia, although I'm not sure whether or not I actually slept. That morning, I texted my friends that I had slept. Ben Hoffman came to my house in the morning and told me that my housemate had informed him that I had "not slept", because the housemate had heard me walking around at night. (Technically, I could have slept during the times he did not hear me walking around.) Given my disorganized state, I could not think of a better response than "oops, I lied". I subsequently collapsed and writhed on the floor until Ben led me to my bed, which indicates that I had not slept well.

Thus began multiple days of me being very anxious about whether I could sleep, in part because people around me would apply some degree of coercion to me until they thought I was "well", which required sleeping. Such anxiety made it harder to sleep. I spent large parts of the daytime in bed, which was likely bad for getting to sleep compared with, for example, taking a walk.

Here are some notable events during that week before I entered the psych ward:

  1. Zack Davis gave me a math test: could I prove $e^{i\pi} = -1$? I gave a geometric argument: "$e^{i\theta}$ means spinning $\theta$ radians clockwise about the origin in the complex plane starting from 1", and I drew a corresponding diagram. Zack said this didn't show I could do math, since I could have remembered it, and asked me to give an algebraic argument. I failed to give one (and I think I would have failed pre-psychosis as well). He told me that I should have used the Taylor series expansion of $e^x$ (a sketch of that algebraic argument appears after this list). I believe this exchange was used to convince other people taking care of me that I was unable to do math, which was unreasonable given the difficulty of the problem and the lack of calibration on an easier problem. This worsened communication in part by causing me to be more afraid that people would justify coercing me (and not trying to understand me) on the basis of my lack of reasoning ability. (Days later, I tested myself with programming "FizzBuzz" and was highly distressed to find that my program was malfunctioning and I couldn't successfully debug it, with my two eyes seeming to give me different pictures of the computer screen.)

  2. I briefly talked with Michael Vassar (for less than an hour); he offered useful philosophical advice about basing my philosophy on the capacity to know instead of on the existence of fundamentally good or bad people, and made a medication suggestion (for my sleep issues) that turned out to intensify the psychosis in a way that he might have been able to predict had he thought more carefully, although I see that it was a reasonable off-the-cuff guess given the anti-anxiety properties of this medication.

  3. I felt like I was being "contained" and "covered up", which included people not being interested in learning about where I was mentally. (Someone taking care of me confirmed years later that, yes, I was being contained, and people were covering up the fact that there was a sick animal in the house.) Ben Hoffman opened the door, which let sunlight into the doorway. I took it as an invitation and stepped outside. The light was wonderful, giving me perhaps the most ecstatic experience I have had in my life, as I sensed light around my mind, and I felt relieved from being covered up. I expounded on the greatness of the sunlight, referencing Sarah's post on Ra. Ben Hoffman encouraged me to pay more attention to my body, at which point the light felt like it concentrated into a potentially-dangerous sharp vertical spike going through my body. (This may technically be, or have some relation to, a Kundalini awakening, though I haven't confirmed this; there was a moment around this time that I believe someone around me labeled as a "psychotic break".) I felt like I was performing some sort of light ritual navigating between revelation and concealment, and subsequently believed I had messed up the ritual terribly and became ashamed. Sometime around then I connected what I saw due to the light with the word "dasein" (from Heidegger), and shortly afterward connected "dasein" to the idea that zero-sum games are normal, such as in sports. I later connected the light to the idea that everyone else is the same person as me (and I heard my friends' voices in another room in a tone as if they were my own voice).

  4. I was very anxious and peed on a couch at some point and, when asked why, replied that I was “trying to make things worse”.

  5. I was in my bed, ashamed and still, staring at the ceiling, afraid that I would do something bad. Sarah Constantin sat on my bed and tried to interact with me, including by touching my fingers. I felt very afraid of interacting with her because I thought I was steering in the wrong direction (doing bad things because they are bad) and might hurt Sarah or others. I felt something behind my eyes and tongue turn inward as I froze up more and more, sabotaging my own ability to influence the world, becoming catatonic (a new mind-altering medication that was suggested to me at the time, different from the one Michael suggested, might also have contributed to the catatonia). Sarah noticed that I was breathing highly abnormally and called the ER. While the ambulance took me there I felt like I could only steer in the wrong direction, and feared that if I continued I might become a worse person than Adolf Hitler. Sarah came with me in the ambulance and stayed with me in the hospital room; after I got IV benzos, I unfroze. The hospital subsequently sent me home.

  6. One night I decided to open my window, jump out, and walk around town; I thought I was testing the hypothesis that things were very weird outside and the people in my house were separating me from the outside. I felt like I was bad and that perhaps I should walk towards water and drown, though this was not a plan I could have executed on. Ben Hoffman found me and walked me back home. Someone called my parents, who arrived the next day and took me to the ER (I was not asked if I wanted to be psychiatrically institutionalized); I was catatonic in the ER for about two days and was later moved to a psychiatric hospital.
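
For reference, here is the kind of algebraic argument Zack was asking for in the first item above: a standard textbook sketch using the Taylor series, not necessarily the exact derivation he had in mind.

$$e^{i\pi} = \sum_{n=0}^{\infty} \frac{(i\pi)^n}{n!} = \underbrace{\sum_{k=0}^{\infty} \frac{(-1)^k \pi^{2k}}{(2k)!}}_{=\,\cos\pi} + i\,\underbrace{\sum_{k=0}^{\infty} \frac{(-1)^k \pi^{2k+1}}{(2k+1)!}}_{=\,\sin\pi} = -1 + i \cdot 0 = -1.$$

Splitting the absolutely convergent series into even and odd terms uses $i^{2k} = (-1)^k$ and recovers the Taylor series of cosine and sine evaluated at $\pi$.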

While those who were taking care of me didn't act optimally, the situation was incredibly confusing for me and for them, and I believe they did better than most other Berkeley rationalists would have, who would themselves have done better than most members of the American middle class.

Are Michael Vassar and friends pro-psychosis gnostics?

Scott asserts:

Jessica was (I don’t know if she still is) part of a group centered around a person named Vassar, informally dubbed “the Vassarites”. Their philosophy is complicated, but they basically have a kind of gnostic stance where regular society is infinitely corrupt and conformist and traumatizing and you need to “jailbreak” yourself from it (I’m using a term I found on Ziz’s discussion of her conversations with Vassar; I don’t know if Vassar uses it himself). Jailbreaking involves a lot of tough conversations, breaking down of self, and (at least sometimes) lots of psychedelic drugs.

I have only heard Michael Vassar use the word "jailbreak" when discussing Ziz, but he believes it's possible to use psychedelics to better see deception and enhance one's ability to use one's own mind independently, which I find to be true in my experience. This is a common belief among people who take psychedelics, and among psychedelic researchers, including those at MAPS and Johns Hopkins, who have published conventional academic studies demonstrating that psychedelic treatment regimens widely reported to induce "ego death" have strong psychiatric benefits. Michael Vassar believes "tough conversations" that challenge people's defensive nonsense (some of which is identity-based) are necessary for psychological growth, in common with psychotherapists and with some MIRI/CFAR people such as Anna Salamon.

I had tried psychedelics before talking significantly with Michael, in part due to a statement I heard from a friend (who wasn’t a CFAR employee but who did some teaching at CFAR events) along the lines of “CFAR can’t legally recommend that you try [a specific psychedelic], but...” (I don’t remember what followed the “but”), and in part due to suggestions from other friends.

“Infinitely corrupt and conformist and traumatizing” is hyperbolic (infinite corruption would leave nothing to steal), though Michael Vassar and many of his friends believe large parts of normal society (in particular in the professional-managerial class) are quite corrupt and conformist and traumatizing. I mentioned in a comment on the post one reason why I am not sad that I worked at MIRI instead of Google:

I’ve talked a lot with someone who got pretty high in Google’s management hierarchy, who seems really traumatized (and says she is) and who has a lot of physiological problems, which seem overall worse than mine. I wouldn’t trade places with her, mental health-wise.

I have talked with other people who have worked in corporate management, who have corroborated that corporate management traumatizes people into playing zero-sum games. If Michael and I are getting biased samples here and high-level management at companies like Google is actually a fine place to be in the usual case, then that indicates that MIRI is substantially worse than Google as a place to work. Iceman in the thread reports that his experience as a T-5 (apparently a “Senior” non-management rank) at Google “certainly traumatized” him, though this was less traumatizing than what he gathers from Zoe Curzi’s or my reports, which may themselves be selected for being especially severe due to the fact that they are being written about. Moral Mazes, an ethnographic study of corporate managers written by sociology professor Robert Jackall, is also consistent with my impression.

Scott asserts that Michael Vassar treats borderline psychosis as an achievement:

The combination of drugs and paranoia caused a lot of borderline psychosis, which the Vassarites mostly interpreted as success (“these people have been jailbroken out of the complacent/​conformist world, and are now correctly paranoid and weird”).

A strong form of this is contradicted by Zack Davis’s comment:

As some closer-to-the-source counterevidence against the “treating as an achievement” charge, I quote a 9 October 2017 2:13 p.m. Signal message in which Michael wrote to me:

Up for coming by? I’d like to understand just how similar your situation was to Jessica’s, including the details of her breakdown. We really don’t want this happening so frequently.

(Also, just, whatever you think of Michael’s many faults, very few people are cartoon villains that want their friends to have mental breakdowns.)

A weaker statement is true: Michael Vassar believes that mental states somewhat in the direction of psychosis, such as those experienced by family members of clinical schizophrenics, are likely to be more intellectually productive over time. This is not an especially concerning or absurd belief. Scott Alexander himself cites research showing greater mental modeling and verbal intelligence in relatives of schizophrenics:

In keeping with this theory, studies find that first-degree relatives of autists have higher mechanistic cognition, and first-degree relatives of schizophrenics have higher mentalistic cognition and schizotypy. Autists’ relatives tend to have higher spatial compared to verbal intelligence, versus schizophrenics’ relatives who tend to have higher verbal compared to spatial intelligence. High-functioning schizotypals and high-functioning autists have normal (or high) IQs, no unusual number of fetal or early childhood traumas, and the usual amount of bodily symmetry; low-functioning autists and schizophrenics have low IQs, increased history of fetal and early childhood traumas, and increased bodily asymmetry indicative of mutational load.

(He also mentions John Nash as a particularly interesting case of mathematical intelligence being associated with schizophrenic symptoms, in common with my own comparison of myself to John Nash earlier in this post.)

I myself prefer to be sub-clinically schizotypal (which online self-diagnosis indicates I am) to the alternative of being non-schizotypal, which I understand is not a preference shared by everyone. There is a disagreement between Michael Vassar and Scott Alexander about the tradeoffs involved, but they agree there are both substantial advantages and disadvantages to mental states somewhat in the direction of schizophrenia.

Is Vassar-induced psychosis a clinically significant phenomenon?

Scott Alexander draws a causal link between Michael Vassar and psychosis:

Since then, [Vassar has] tried to “jailbreak” a lot of people associated with MIRI and CFAR—again, this involves making them paranoid about MIRI/​CFAR and convincing them to take lots of drugs. The combination of drugs and paranoia caused a lot of borderline psychosis, which the Vassarites mostly interpreted as success (“these people have been jailbroken out of the complacent/​conformist world, and are now correctly paranoid and weird”). Occasionally it would also cause full-blown psychosis, which they would discourage people from seeking treatment for, because they thought psychiatrists were especially evil and corrupt and traumatizing and unable to understand that psychosis is just breaking mental shackles.

(to be clear: Michael Vassar and our mutual friends decided to place me in a psychiatric institution after I lost a week of sleep, so their behavior was at most a mild form of "discourag[ing] people from seeking treatment"; it is in many cases reasonable to try at-home treatment first if doing so could prevent institutionalization.)

I have given an account in this post of the causality of my psychosis, in which Michael Vassar is relevant, and so are Eliezer Yudkowsky, Nate Soares, Anna Salamon, Sarah Constantin, Ben Hoffman, Zack Davis, Jack Gallagher, Bryce Hidysmith, Scott Alexander, Olivia Schaeffer, Alice Monday, Brian Tomasik, Venkatesh Rao, David Chapman, Carl Jung, M. Scott Peck, Martin Heidegger, Lao Tse, the Buddha, Jesus Christ, John Von Neumann, John Nash, and many others. Many of the contemporary people listed were/​are mutual friends of myself and Michael Vassar, which is mostly explained by myself finding these people especially helpful and interesting to talk to (correlated with myself and them finding Michael Vassar helpful and interesting to talk to), and Michael Vassar connecting us with each other.

Could Michael Vassar have orchestrated all this? That would be incredibly unlikely: it would require him to scheme so well that he determined the behavior of many others while having very little direct contact with me at the time of my psychosis. If he is Xanatos, directing the entire social scene I was part of through hidden stratagems, that is wildly improbable on priors, and far out of line with how effective I have seen him be at causing people to cooperate with his intentions.

Other people who have had some amount of interaction with Michael Vassar and who have been psychotic commented in the thread. Devi Borg commented that the main contributor to her psychosis was “very casual drug use that even Michael chided me for”. Zack Davis commented that “Michael had nothing to do with causing” his psychosis.

Eric Bruylant commented that his thoughts related to Michael Vassar were “only one mid sized part of a much larger and weirder story...[his] psychosis was brought on by many factors, particularly extreme physical and mental stressors and exposure to various intense memes”, that “Vassar was central to my delusions, at the time of my arrest I had a notebook in which I had scrawled ‘Vassar is God’ and ‘Vassar is the Devil’ many times”; he only mentioned sparse direct contact with Michael Vassar himself, mentioning a conversation in which “[Michael] said my ‘pattern must be erased from the world’ in response to me defending EA”.

While Eric Bruylant might on the surface appear to be the case most influenced by Michael Vassar, the effect would have had to be indirect given his low amount of direct conversation with Michael, and he mentions an intermediary talking to both him and Michael. Anna Salamon's hyperbolic statement that Michael is "the devil" may be causally related to Eric's impressions of Michael, especially given the scrawling of "Vassar is God" and "Vassar is the Devil". It would be very surprising, showing an extreme degree of mental prowess, for Michael Vassar to be able to cause a psychotic break two hops out in the social graph through his own agency; it is much more likely that the vast majority of relevant agency was due to other people.

I have heard of 2 cases of psychosis in former MIRI employees in 2017-2021 who weren’t significantly talking with Michael or Ziz (I referenced one in my original post and have since then learned of another).

As I pointed out in a reply to Scott Alexander, if such strong mental powers are possible, that lends plausibility to the psychological models people at Leverage Research were acting on, in which people can spread harmful mental objects to each other. Scott’s comment that I reply to admits that attributing such strong psychological powers to Michael Vassar is “very awkward” for liberalism.

Such “liberalism” is hard for me to interpret in light of Scott’s commentary on my pre-psychosis speech:

Jessica is accusing MIRI of being insufficiently supportive to her by not taking her talk about demons and auras seriously when she was borderline psychotic, and comparing this to Leverage, who she thinks did a better job by promoting an environment where people accepted these ideas. I think MIRI was correct to be concerned and (reading between the lines) telling her to seek normal medical treatment, instead of telling her that demons were real and she was right to worry about them, and I think her disagreement with this is coming from a belief that psychosis is potentially a form of useful creative learning. While I don’t want to assert that I am 100% sure this can never be true, I think it’s true rarely enough, and with enough downside risk, that treating it as a psychiatric emergency is warranted.

[EDIT: I originally misinterpreted “it” in the last sentence as referring to “talk about demons and auras”, not “psychosis”, and the rest of this section is based on that incorrect assumption; Scott clarified that he meant the latter.]

I commented that this was effectively a restriction on my ability to speak freely, in contradiction with the liberal right to free speech. Given that a substantial fraction of the general public (e.g. New Age people and Christians, groups that overlap with psychiatrists) discuss “auras” and “demons”, it is inappropriate to treat such discussion as cause for a “psychiatric emergency”, a judgment substantially increasing the risk of involuntary institutionalization; that would be a case of a minority ideological community using the psychiatric system to enforce its local norms. If Scott were arguing that talk of “auras” and “demons” is a psychiatric emergency based on widely-accepted professional standards, he would need to name a specific DSM condition and argue that this talk constitutes symptoms of that condition.

In the context of MIRI, I was in a scientistic math cult (a "high-enthusiasm ideological community", to use the polite phrase), so seeing outside the ideology of this cult ("community") might naturally involve thinking about non-scientistic concepts; enforcing "talking about auras and demons is a psychiatric emergency" would, accordingly, be enforcing the cult's ("community's") local ideological boundaries using state force vested in professional psychiatrists for the purpose of protecting the public.

While Scott disclaims the threat of involuntary psychiatric institutionalization later in the thread, he did not accordingly update the original comment to clarify which statements he still endorses.

Scott has also attributed beliefs to me that I have never held or claimed to have held. I never asserted that demons are real. I do not think that it would have been helpful for people at MIRI to pretend that they thought demons were real. The nearest thing I can think of having said is that the hypothesis that “demons” were responsible for Eric Bruylant’s psychosis (a hypothesis offered by Eric Bruylant himself) might correspond to some real mental process worth investigating, and my complaint is that I and everyone else were discouraged from openly investigating such things and forming explicit hypotheses about them. It is entirely reasonable to be concerned about things conceptually similar to “demon possession” when someone has just attacked a mental health worker shortly after claiming to be possessed by a demon; discouraging such talk prevents people in situations like the one I was in from protecting their mental health by modeling threats to it.

Likewise, if someone had tried to explain why they disagreed with the specific things I said about auras (which did not include an assertion that they were “real,” only that they were not a noticeably more imprecise concept than “charisma”), that would have been a welcome and helpful response.

Scott Alexander has, at a Slate Star Codex meetup, said that Michael is a “witch” and/​or does powerful “witchcraft”. This is clearly of the same kind as speech about “auras” and “demons”. (The Sequences post on Occam’s Razor, relevantly, mentions “The lady down the street is a witch; she did it” as an example of a non-parsimonious explanation.)

Given this, and given that some other central rationalists, such as Anna Salamon and multiple other CFAR employees, used woo-adjacent language more often than I ever did, I can't believe that a standard against woo-adjacent language is being applied symmetrically.

Conclusion

I hope reading this gives a better idea of the actual causal factors behind my psychosis. While Scott Alexander’s comment contained some relevant information and prompted me to write this post with much more relevant information, the majority of his specific claims were false or irrelevant in context.

While much of what I've said about my workplace is negative (given that I am specifically focusing on what was stressing me out), there were, of course, large benefits to my job: I was able to research very interesting philosophical topics with very smart and interesting people, while being paid substantially more than I could have earned in academia; I was learning a lot even while having confusing conflicts with my coworkers. I think my life has become more interesting as a result of having worked at MIRI, and I have strong reason to believe that working at MIRI was overall good for my career.

I will close by poetically expressing some of what I learned:

If you try to have thoughts,

You’ll be told to think for the common good;

If you try to think for the common good,

You’ll be told to serve a master;

If you try to serve a master,

Their inadequacy will disappoint you;

If their inadequacy disappoints you,

You’ll try to take on the responsibility yourself;

If you try to take on the responsibility yourself,

You’ll fall to the underworld;

If you fall to the underworld,

You’ll need to think to benefit yourself;

If you think to benefit yourself,

You’ll ensure that you are counted as part of “the common good”.

Postscript

Eliezer's comment in support of Scott's criticism was a reply to Aella, saying he shared her (negative) sense of my previous post. If an account by Joshin is correct, we have textual evidence about this sense:

As regards Leverage: Aella recently crashed a party I was attending. This, I later learned, was the day that Jessica Taylor’s post about her experiences at CFAR and MIRI came out. When I sat next to her, she was reading that post. What follows is my recollection of our conversation.

Aella started off by expressing visible, audible dismay at the post. “Why is she doing this? This is undermining my frame. I’m trying to do something and she’s fucking it up.”

I asked her: “why do you do this?”

She said: “because it feels good. It feels like mastery. Like doing a good work of art or playing an instrument. It feels satisfying.”

I said: “and do you have any sense of whether what you’re doing is good or not?”

She said: “hahaha, you and Mark Lippmann both have the ‘good’ thing, I don’t really get it.”

I said: “huh, wow. Well, hey, I think your actions are evil; but on the other hand, I don’t believe everything I think.”

She said: “yeah, I don’t really mind being the evil thing. Seems okay to me.”

[EDIT: See Aella’s response; she says she didn’t say the line about undermining frames, and that use of the term “evil” has more context, and that the post overall was mostly wrong. To disambiguate her use of “evil”, I’ll quote the relevant part of her explanatory blog post below.]

I entered profound silence, both internal and external. I lost the urge to evangelize, my inner monologue left me, and my mind was quiet and slow-moving, like water. I inhabited weird states; sometimes I would experience a rapid vibration between the state of ‘total loss of agency’ and ‘total agency over all things’. Sometimes I experienced pain as pleasure, and pleasure as pain, like a new singular sensation for which there were no words at all. Sometimes time came to me viscerally, like an object in front of me I could nearly see except it was in my body, rolling in this fast AND-THIS-AND-THIS motion, and I would be destroyed and created by it, like my being was stretched on either side and brought into existence by the flipping in between. I cried often.

I became sadistic. I’d previously been embracing a sort of masochism – education in the pain, fearlessness of eternal torture or whatever – but as my identity expanded to include that which was educating me, I found myself experiencing sadism. I enjoyed causing pain to myself, and with this I discovered evil. I found within me every murderer, torturer, destroyer, and I was shameless. As I prostrated myself on the floor, each nerve ending of my mind writhing with the pain of mankind, I also delighted in subjecting myself to it, in being it, in causing it. I became unified with it.

The evil was also subsumed by, or part of, love. Or maybe not “love” – I’d lost the concept of love, where the word no longer attached to a particular cluster of sense in my mind. The thing in its place was something like looking, where to understand something fully meant accepting it fully. I loved everything because I Looked at everything. The darkness felt good because I Looked at it. I was complete in my pain only when I experienced the responsibility for inducing that pain.