I’m not sure what writing this comment felt like for you, but from my view it seems like you’ve noticed a lot of the dynamics about scapegoating and info-suppression fields that Ben and Jessica have blogged about in the past (and occasionally pointed out in the course of these comments, though less clearly). I’m going to highlight a few things.
I do think that Jessica writing this post will predictably have reputational externalities that I don’t like and I think are unjustified.
Broadly, I think that onlookers not paying much attention would have concluded from Zoe’s post that Leverage is a cult that should be excluded from polite society, and, hearing of both Zoe’s and Jessica’s posts, are likely to conclude that Leverage and MIRI are similarly bad cults.
I totally agree with this. I also think that the degree to which an “onlooker not paying much attention” concludes this is the degree to which they are habituated to engaging with discussion of wrongdoing as scapegoating games. This seems to be very common (though incredibly damaging) behavior. Scapegoating works on the associative/impressionistic logic of “looks”, and Jessica’s post certainly makes CFAR/MIRI “look” bad. This post can be used as “material” or “fuel” for scapegoating, regardless of Jessica’s intent in writing it. Though it can’t be used honestly to scapegoat (if there even is such a thing). Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about “HEY, DON’T USE THIS TO SCAPEGOAT”, and has no plausible claim to doing justice, upholding rules, or caring about the truth of the matter in any important relevant sense.
(aside, from both my priors on Jess and my reading of the post it was clear to me that Jess wasn’t trying to scapegoat CFAR/MIRI. It also simply isn’t in Jess’s interests for them to be scapegoated)
Another thought: CFAR/MIRI already “look” crazy to most people who might check them out. UFAI, cryonics, and acausal trade are all things that “look” crazy. And yet we’re all able to talk about them on LW without worrying about “how it looks”, because many, many conversations, sequences, blog posts, comments, etc. have created a community with different common knowledge about what will result in people ganging up on you.
Something that we as a community don’t talk a lot about is power structures, coercion, emotional abuse, manipulation, etc. We don’t collectively build and share models of their mechanics and structure. As such, I think it’s to be expected that when “things get real” people abandon commitment to the truth in favor of “oh shit, there’s an actual conflict, I or others could be scapegoated, I am not safe, I need to protect my people from being scapegoated at all cost”.
However, I think that we mostly shouldn’t be in the business of trying to cater to bystanders who are not invested in understanding what is actually going on in detail, and we especially should not compromise the discourse of people who are invested in understanding.
I totally agree, and I think if you explore this sense you already sorta see how commitment to making sure things “look okay” quickly becomes a commitment to suppress information about what happened.
(aside, these are some of Ben’s posts that have been most useful to me for understanding some of this stuff)
Blame Games
Can Crimes Be Discussed Literally?
Judgement, Punishment, and Information-Suppression Fields
I appreciate this comment, especially that you noticed the giant upfront paragraph that’s relevant to the discussion :)
One note on reputational risk: I think I made reasonable efforts to reduce it, by emailing a draft to people including Anna Salamon beforehand. Anna Salamon added Matt Graves (Vaniver) to the thread, and they both said they’d be happy with me posting after editing (Matt Graves had a couple of specific criticisms of the post). I only posted this on LW, not on my blog or Medium. I didn’t promote it on Twitter except to retweet someone who was already tweeting about it. I don’t think such reputation-risk reduction on my part was morally obligatory (it would be really problematic to require people complaining about X organization to get approval from someone working at that organization), just possibly helpful anyway.
Spending more than this amount of effort managing reputational risks would seriously risk important information not getting published at all, and too little of that info being published would doom the overall ambitious world-saving project by denying it relevant knowledge about itself. I’m not saying I acted optimally; I just don’t see the people complaining about this making a better tradeoff in their own actions, or advising specific policies that would improve the tradeoff.
Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about “HEY, DON’T USE THIS TO SCAPEGOAT”
I think that’s literally true, but also the way you wrote this sentence implies that doing so is unusual or uncommon.
I think that’s backwards. If a person was intentionally and deliberately motivated to scapegoat some other person or group, it is an effective rhetorical move to say “I’m not trying to punish them, I just want to talk freely about some harms.”
By pretending that you’re not attacking the target, you protect yourself somewhat from counterattack. Now you can cause reputational damage, and if people try to punish you for doing that, you can retreat to the motte of “but I was just trying to talk about what’s going on. I specifically said not to punish anyone!”
and has no plausible claim to doing justice, upholding rules, or caring about the truth of the matter in any important relevant sense.
This also seems too strong to me. I expect that many movement EAs will read Zoe’s post and think “well, that’s enough information for me to never have anything to do with Geoff or Leverage.” This isn’t because they’re not interested in justice; it’s because they don’t have the time or the interest to investigate every allegation, so they’re using some rough heuristics and policies such as “if something looks sufficiently like a dangerous cult, don’t even bother giving it the benefit of the doubt.”
When I was drafting my comment, the original version of the text you first quoted was, “Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about ‘HEY DON’T USE THIS TO SCAPEGOAT’ (which people are totally capable of ignoring)”; I guess I should have left that in there. I don’t think it’s uncommon to ignore such disclaimers; I do think doing so actively opposes behaviors and discourse norms I wish to see in the world.
I agree that putting an “I’m not trying to blame anyone” disclaimer can be a pragmatic rhetorical move for someone attempting to scapegoat. There’s an alternate-timeline version of Jessica that wrote this post as a well-crafted, well-defended rhetorical attack, where the literal statements in the post all clearly say “don’t fucking scapegoat anyone, you fools” but all the associative and impressionistic “dark implications” (Vaniver’s language) say “scapegoat CFAR/MIRI!” I want to draw your attention to the fact that for a potential dark implication to do anything, you need people who can pick up that signal. For it to be an effective rhetorical move, you need a critical mass of people who are well practiced in ignoring literal speech, who understand on some level that the details don’t matter, and who are listening in for “who should we blame?”
To be clear, I think there is such a critical mass! I think this is very unfortunate! (though not awkward, as Scott put it) There was a solid 2+ days where Scott and Vaniver’s insistence on this being a game of “Scapegoat Vassar vs. scapegoat CFAR/MIRI” totally sucked me in, and instead of reading the contents of anyone’s comments I was just like “shit, whose side do I join? How bad would it be if people knew I hung out with Vassar once? I mean, I really loved my time at CFAR, but I’m also friends with Ben and Jess. Fuck, but I also think Eli is a cool guy! Shit!” That mode of thinking I engaged in is a mode that can’t really get me what I want, which is larger and larger groups of people that understand scapegoating dynamics and related phenomena.
This also seems too strong to me. I expect that many movement EAs will read Zoe’s post and think “well, that’s enough information for me to never have anything to do with Geoff or Leverage.” This isn’t because they’re not interested in justice; it’s because they don’t have the time or the interest to investigate every allegation, so they’re using some rough heuristics and policies such as “if something looks sufficiently like a dangerous cult, don’t even bother giving it the benefit of the doubt.”
Okay, I think my statement was vague enough to be mistaken for a statement I think is too strong. Though I expect you might consider my clarification too strong as well :)
I was thinking about the “in any way that matters” part. I can see how that implies a sort of disregard for justice that spans across time. Or more specifically, I can see how you would think it implies that certain conversations you’ve had with EA friends were impossible, or that they were lying/confabulating the whole conversation, and you don’t think that’s true. I don’t think that’s the case either. I’m thinking about it more as piece-wise behavior. One will sincerely care about justice, but in that moment where they read Jess’s post, ignore the giant disclaimer about scapegoating, and try to scapegoat MIRI/CFAR/Leverage, in that particular moment the cognitive processes generating their actions aren’t aligned with justice, and are working against it. It’s almost like an “anti-justice traumatic flashback”, though most of the time it’s much more low-key and less intense than what you will read about in the literature on flashbacks. Malcolm Ocean does a great job of describing this sort of “falling into a dream” in his post Dream Mashups (his post is not about scapegoating; it’s about ending up running a cognitive algorithm that hurts you without noticing).
To be clear, I’m not saying such behavior is contemptible, blameworthy, bad, or to-be-scapegoated. I am saying it’s very damaging, and I want more people to understand how it works. I want to understand how it works more. I would love to not get sucked into as many anti-justice dreams where I actively work against creating the sort of world I want to live in.
So when I said “not aligned with justice in any important relevant way”, that was more a statement about “how often and when will people fall into these dreams?” Sorta like the concept of a “fair-weather friend”, my current hunch is that people fall into scapegoating behavior exactly when it would be most helpful for them not to. And reading a post about “here are some problems I see in this institution that is at the core of our community” is exactly when it is most important for one’s general, atemporal commitment to justice to be present in one’s actual thoughts and actions.