Short version: could the rationalist community have handled these things better (even supposing we had magically known a decade ago that something like this would happen, without knowing the specific names)? Is there a lesson to learn, or is it just bad luck that sometimes, when you run a workshop, a future serial killer will participate?
It seems that we are not responsible for Ziz existing, or coming to our workshop, or coming up with a crazy theory that allowed them to create a murderous cult.
But it is our mistake that we didn’t stand firmly against drugs, didn’t pay more attention to the dangers of self-experimenting, and didn’t kick out Ziz sooner.
When your online blog transforms into an offline community, you have to take responsibility for people’s safety, even if that means doing things that will be unpopular with some contrarians. Otherwise, bad things are going to happen; it is just a question of time. As they say, safety regulations are written in blood.
There’s a lot more complexity, obviously, but one thing that sticks out to me is this paragraph, from https://sinceriously.blog-mirror.com/net-negative/ :

I described how I felt like I was the only one with my values in a world of flesh eating monsters, how it was horrifying seeing the amoral bullet biting consistency of the rationality community, where people said it was okay to eat human babies as long as they weren’t someone else’s property if I compared animals to babies. How I was constantly afraid that their values would leak into me and my resolve would weaken and no one would be judging futures according to sentient beings in general. How it was scary Eliezer Yudkowsky seemed to use “sentient” to mean “sapient”. How I was constantly afraid if I let my brain categorize them as my “in-group” then I’d lose my values.
This is one among several top hypothesis-parts for something at the core of how Ziz, and by influence other Zizians, gets so far gone from normal structures of relating. It is indeed true that normal people (I mean, including the vast majority of rationalists) live deep in an ocean of {algorithm, stance, world, god}-sharing with people around them. And it’s true that this can infect you in various ways, erode structures in you, erode values. So you can see how someone might think it’s a good idea to become oppositional to many normal structures of relating; and how that can be in a reinforcing feedback loop with other people’s reactions to your oppositionality and rejection.
(As an example of another top hypothesis-part: I suspect Ziz views betrayal of values (the blackmail payout thing) and betrayal of trust (the behavior of Person A) sort of as described here: https://sideways-view.com/2016/11/14/integrity-for-consequentialists/ In other words, if someone’s behavior is almost entirely good, but then in some subtle sneaky ways or high-stakes ways bad, that’s an extreme mark against them. Many would agree with the high-stakes part of (my imagined) Ziz’s stance here, but many fewer would agree so strongly with the subtle sneaky part.)
If that’s a big part of what was going on, it poses a general question (which is partly a question of community behavior and partly a question of mental technology for individuals): How to make it more feasible to get the goods of being in a community, without the bads of value erosion?
There is a 4D chess explanation for everything. But to me it seems like rationalization. I strongly suspect that if we tried to write down all this Insanity-Wolf ethics, it would turn out that Ziz actually did not follow it consistently, only when convenient. (Just like e.g. the Christians who insist that you have to follow the Bible no matter what… until they get to the chapter where it tells you to sell all your property and donate the money to the poor… at which point they switch to following the common sense instead.) For example, if paying a blackmailer is an unforgivable crime, how come they paid the 10k bail to get Ziz out of jail? And if betraying trust is so bad, how come they didn’t want to pay the rent (but also didn’t leave)?
Also, the entire theory about Ziz being double-good and all her followers single-good is so blatantly self-serving that you probably need to have half of your brain sleeping in order to miss it.
the blackmail payout thing
By the way, is there an explanation somewhere what actually happened? (Not just what Ziz believed.)
Just FYI none of what you said responds to anything I said, AFAICT. Are you just arguing “Ziz is bad”? My comment is about what causes people to end up the way Ziz ended up, which is relevant to your question “Is there a lesson to learn?”.
By the way, is there an explanation somewhere what actually happened? (Not just what Ziz believed.)

Somewhere on this timeline, I think: https://x.com/jessi_cata/with_replies
Thanks for the link! So it’s about that “miricult” website.
Now I feel like rationality itself is an infohazard. I mean, rationality itself won’t hurt you if you are sufficiently sane, but if you start talking about it, insufficiently sane people will listen, too. And that will have horrible consequences. (And when I try to find a way to navigate around this, such as talking openly only to certifiably sane people, that seems like the totally cultish thing to do.)
@PhilGoetz’s Reason as memetic immune disorder seems relevant here. It has been noted many times that engineers are disproportionately involved in terrorism, in ways that the mere usefulness of their engineering skills can’t explain.
Teaching rationality the shallow way—nope; knowing about biases can hurt people
Teaching rationality the deep way—nope; reason as a memetic immune disorder

:(

Perhaps there should be some “pre-rationality” lessons. Something stabilizing you need to learn first, so that learning about rationality does not make you crazy.
There are some materials that already seem to point in that direction: adding up to normality, ethical injunctions. Perhaps the CFAR workshops should start with focusing on these things, in a serious way (like, spend at least one day only debating this, check that the participants understood the lesson, and maybe kick out those who didn’t?).
Because, although some people get damaged by learning about rationality, it seems to me that many people don’t (some of them only because they don’t change in any significant way, but some of them internalize the lessons in a good way). If we could predict who would end up which way, that could allow us to reduce the damage, while still delivering the value.
Of course this only applies to the workshops; online communication is a different question. But it seems to me that the bad things mostly happen offline.
Now I feel like rationality itself is an infohazard. I mean, rationality itself won’t hurt you if you are sufficiently sane, but if you start talking about it, insufficiently sane people will listen, too. And that will have horrible consequences. (And when I try to find a way to navigate around this, such as talking openly only to certifiably sane people, that seems like the totally cultish thing to do.)
There is an alternative way, the other extreme: get more and more rationalists. If the communities thus formed do not share the moral inclinations of the LW community, they might form some new coordination structures[1]; if we don’t recruit from the circles of the desperate, those structures will tend to benefit others as well (on the other hand, a community with a big proportion of very unsatisfied people would naturally start a gang or overthrow whatever institutions are around).
(It’s probably worth exploring in a separate post?)
I claim non-orthogonality between goals and means in this case. For some community with altruistic people, its structures require learning a fair bit about people’s values. For a group which wants tech companies to focus on consumers’ quality-of-life more than currently, not so.
From my experience, the rationality community in Vienna does not share any of the craziness in Bay Area that I read about, so yeah, it seems plausible that different communities will end up significantly different.
I think there is a strong founder effect… the new members will choose whether they join or not depending on how comfortable they feel among the existing members. Decisions like “we have these rules / we don’t have any rules”, “there are people responsible for organization and safety / everyone needs to take care of themselves” once established, easily become “the way this is done here”.
But you are also limited by the pool you are recruiting potential new members from. Could be, there are simply not enough people to make a local rationality community. Could be, the local memes are so strong (e.g. a positive attitude towards drug use, or wokeness) that in practice you cannot push against them without actively rejecting most of the wannabe members, which would be a weird dynamic. (You already need to push strongly against people who simply do not get what rationality means, but are trying to join anyway.)
But it is our mistake that we didn’t stand firmly against drugs, didn’t pay more attention to the dangers of self-experimenting, and didn’t kick out Ziz sooner.
These don’t seem like very relevant or very actionable takeaways.
we didn’t stand firmly against drugs—Maybe this would have been a good move generally, but it wouldn’t have helped with this situation at all. Ziz reports that they don’t take psychedelics, and I believe that extends to her compatriots, as well.
didn’t pay more attention to the dangers of self-experimenting—What does this mean concretely? I think plenty of people did “pay attention” to the dangers of self experimenting. But “paying attention” doesn’t automatically address those dangers.
What specific actions would you recommend by which people? Eliezer telling people not to self experiment? CFAR telling people not to self experiment? A blanket ban on “self experimentation” is clearly too broad (“just don’t ever try anything that seems like maybe a good idea to you on first principles”). Some more specific guidelines might have helped, but we need to actually delineate the specific principles.
didn’t kick out Ziz sooner—When specifically is the point when Ziz should have been kicked out of the community? With the benefit of hindsight, we can look back and wish we had separated sooner, but that was not nearly as clear ex ante.
What should have been the trigger? When she started wearing black robes? When she started calling herself Ziz? When she started writing up her own homegrown theories of psychology? Weird clothes, weird names, and weird beliefs are part and parcel of the rationalist milieu.
As it is, she was banned from the alumni reunion at which she staged the failed protest (she bought tickets in advance; CFAR told her that she was uninvited and returned her money). Before that, I think that several community leaders had grey-listed her as someone not to invite to events. Should something else have happened, in addition to that? Should she have been banned from public events or private group houses entirely? On what basis? On whose authority?
What should have been the trigger? When she started wearing black robes? When she started calling herself Ziz? When she started writing up her own homegrown theories of psychology? Weird clothes, weird names, and weird beliefs are part and parcel of the rationalist milieu.
FWIW, I think I had triggers around them being weird/sketchy that would now cause me to exclude them from many community things, so I do think there were concrete triggers, and I did update on that.
After the shootout, investigators who searched the car reported finding a cache of tactical gear, including a ballistic helmet, a night-vision device, face respirators, two-way radios and dozens of hollow-point bullets. They also located Youngblut’s journal, which according to prosecutors contained “cypher text” and writings about her psychedelic experiences.
“This lsd trip seems pretty mellow,” she allegedly wrote. “i fell kinda high vibrationy maybe more so than other lsd trips?”
I wasn’t there, so who knows how I would have reacted (it probably looks different in hindsight), but it seems like there were already red flags; some people noticed them, and others ignored them:
Salamon told Open Vallejo that LaSota attended three CFAR events between 2014 and 2018. Concerned by their “weird” behavior and interactions with other CFAR attendees, Salamon tried to convince a joint admissions committee between the Machine Intelligence Learning Institute and CFAR to not admit LaSota into their month-long summer fellowship in 2018. Salamon, however, was overruled.
“When LaSota attended the final program in summer 2018, I was physically afraid in a way I’ve never been with anyone else,” Salamon said in an email to Open Vallejo.
Salamon was concerned with some of the ideas that LaSota talked about during the workshops and with her in private. They included theories on “hemispheric sleep,” in which LaSota claimed that humans can split their consciousness between two sides of the brain, allowing one side to sleep while the other is awake, she said. In addition, these two sides of the brain may be “good,” “evil,” or both.
But it is our mistake that we didn’t stand firmly against drugs, didn’t pay more attention to the dangers of self-experimenting, and didn’t kick out Ziz sooner.
Ziz is actually straight edge, she was super paranoid about drugs messing with her or leaving her in a less functional state. Also like, imo? kicking Ziz out sooner wouldn’t have helped, if anything it would have exacerbated the issue and possibly just brought things to a head faster. you can’t just inflict severe trauma on someone and wash your hands of them, eventually that will come back to bite you.
you can’t just inflict severe trauma on someone and wash your hands of them, eventually that will come back to bite you.
Could you please clarify what you mean in this context by “inflicting severe trauma”? Like, learning about timeless decision theory? (At the CFAR workshop, or would reading the Sequences online already qualify as inflicting trauma?)
If CFAR workshops were inflicting trauma on Ziz, then… more workshops mean more trauma? (Or don’t they? How should CFAR predict which workshops will have a traumatizing effect and which ones will be okay? Especially for a person that seems unusual, because hundreds of others have participated at the workshops without being traumatized by them.) So it’s like “if you inflict trauma on someone, you can’t just stop inflicting more trauma on them”?
it would have exacerbated the issue and possibly just brought things to a head faster
This seems to match patterns like “you can’t just break up with an abusive boyfriend, because that would escalate the situation and he might seriously hurt you”. Like, maybe yes, but what is the proposed alternative? Because obviously “doing more of the same” seems to only make things worse, albeit more slowly in the short term.
(I don’t think the analogy is improper here, considering that Zizians have actually hurt people—that’s why we are having this debate—and vindictiveness seems like a central component of their ideology.)
Possible answer: when CFAR notices that someone seems to be reacting really badly to their workshop, they should offer to pay for therapy? (And increase the cost of the workshop, like a health-insurance premium.) Do you think there is a chance Ziz would have accepted?
I wrote a shortform about Zizians, before I noticed this thread.
From https://www.sfchronicle.com/bayarea/article/ziz-lasota-zizians-rationalism-20063671.php:
-- ‘Zizian’ namesake who faked death in 2022 is wanted in two states
Here’s a way to find out. (Perhaps unrealistic/intractable (IDK), but it is a way to find out.)

1. Research the number of malefactors of Ziz’s type/magnitude per 1,000 active members, across various communities/movements.
2. Identify positive outliers: communities with a malefactor-to-active-member ratio well below average.
3. Identify what accounts for this.
4. If it is anything that can be replicated, replicate it.
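For concreteness, the first two steps (computing per-1,000 rates and flagging communities well below the cross-community average) could be sketched like this. All community names and counts below are invented placeholders, not real data, and the “half the average rate” threshold is an arbitrary illustrative choice:

```python
# Sketch of steps 1-2: compute malefactors per 1,000 active members
# and flag communities well below the cross-community average.
# All figures are invented placeholders, not real data.

communities = {
    "community_a": {"active_members": 5000, "malefactors": 4},
    "community_b": {"active_members": 12000, "malefactors": 3},
    "community_c": {"active_members": 800, "malefactors": 2},
}

def rate_per_1000(stats):
    """Malefactors per 1,000 active members."""
    return 1000 * stats["malefactors"] / stats["active_members"]

rates = {name: rate_per_1000(s) for name, s in communities.items()}
average = sum(rates.values()) / len(rates)

# "Positive outliers" here = communities at half the average rate or less.
# A real analysis would need confidence intervals: small communities
# produce very noisy rates from a handful of cases.
outliers = [name for name, r in rates.items() if r <= average / 2]

for name, r in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{name}: {r:.2f} per 1,000")
print("positive outliers:", outliers)
```

With the placeholder numbers above, the small community has the worst rate despite having the fewest incidents, which illustrates why raw counts alone (step 1 without normalization) would mislead.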