Worried that I might already be a post-rationalist. I’m very interested in minimizing miscommunication, and helping people through the uncanny valley of rationality. Feel free to pm me about either of those things.
Hazard
I can visibly see you training him, via verbal conversation, to outperform the vast majority of journalists at talking about epistemics.
Metz doesn’t seem any better at seeming like he cares about or thinks at all about epistemics than he did in 2021.
Thinking about people I know who’ve met Vassar, the ones who weren’t brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he’s spooky or cultish; to them, he’s obviously just a guy with an interesting perspective.
This is very interesting to me! I’d like to hear more about how the two groups’ behavior looks different, and also your thoughts on what’s the difference that makes the difference: what are the pieces of “being brought up to go to college” that lead to one class of reactions?
One reason I’m finding it hard to give advice is because though it does feel like there’s a generalizable “shape” of this problem, it’s got a lot of degrees of freedom and I have seen the detailed way in which my own history has filled those in.
That aside, two guesses on “if you have it”. If you have strong feelings/beliefs about what sort of emotional reactions you should have to things, that feels relevant. Depending on the person, this might not even feel like a guilty should. I have held that I’m “not an angry person.” Digging into that you’d find that I hate when people are overtly angry, it can make my blood boil, and “I’m not an angry person” is some top down, “people who get angry are sub-human, obviously I’m better than that.” This seems relevant because this has been the fuel/motivation for me to ignore my emotions.
Also, if you ever explicitly go, “I’m just not going to feel this way anymore” that might be relevant. As mentioned, mine was not a secret under the radar ignoring emotions. I was aware of doing “something”, I just thought that something was “being in control and shifting my mood.”
The thing that originally set me off on the noticing path that led to now, was realizing that I’d shut off a lot of my ability to organically want. This became apparent from times when I’d go, “cool, I really don’t have to do anything this weekend, what do I want to do?” *crickets*
On the most abstract level of “what to do”, I’d say try and make your mind a safe place. Do things in the self compassion space. When I went to CFAR, someone gave a lightning talk where they demoed going through some compassionate self talk in front of us, and that had a strong impact on me.
HOLY shit! I just checked out the new concepts portion of the site that shows you all the tags. This feels like a HUGE step in the direction the LW team’s vision of a place where knowledge production can actually happen.
When I was drafting my comment, the original version of the text you first quoted was, “Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about ‘HEY DON’T USE THIS TO SCAPEGOAT’ (which people are totally capable of ignoring)”, guess I should have left that in there. I don’t think it’s uncommon to ignore such disclaimers, I do think it actively opposes behaviors and discourse norms I wish to see in the world.
I agree that putting a “I’m not trying to blame anyone” disclaimer can be a pragmatic rhetorical move for someone attempting to scapegoat. There’s an alternate timeline version of Jessica that wrote this post as a well crafted, well defended rhetorical attack, where the literal statements in the post all clearly say “don’t fucking scapegoat anyone, you fools” but all the associative and impressionistic “dark implications” (Vaniver’s language) say “scapegoat CFAR/MIRI!” I want to draw your attention to the fact that for a potential dark implication to do anything, you need people who can pick up that signal. For it to be an effective rhetorical move, you need a critical mass of people who are well practiced in ignoring literal speech, who understand on some level that the details don’t matter, and are listening in for “who should we blame?”
To be clear, I think there is such a critical mass! I think this is very unfortunate! (though not awkward, as Scott put it) There was a solid 2+ days where Scott and Vaniver’s insistence on this being a game of “Scapegoat Vassar vs scapegoat CFAR/MIRI” totally sucked me in, and instead of reading the contents of anyone’s comments I was just like “shit, whose side do I join? How bad would it be if people knew I hung out with Vassar once? I mean I really loved my time at CFAR, but I’m also friends with Ben and Jess. Fuck, but I also think Eli is a cool guy! Shit!” That mode of thinking I engaged in is a mode that can’t really get me what I want, which is larger and larger groups of people that understand scapegoating dynamics and related phenomena.
This also seems too strong to me. I expect that many movement EAs will read Zoe’s post and think “well, that’s enough information for me to never have anything to do with Geoff or Leverage.” This isn’t because they’re not interested in justice, it’s because they don’t have the time or the interest to investigate every allegation, so they’re using some rough heuristics and policies such as “if something looks sufficiently like a dangerous cult, don’t even bother giving it the benefit of the doubt.”
Okay, I think my statement was vague enough to be mistaken for a statement I think is too strong. Though I expect you might consider my clarification too strong as well :)
I was thinking about the “in any way that matters” part. I can see how that implies a sort of disregard for justice that spans across time. Or more specifically, I can see how you would think it implies that certain conversations you’ve had with EA friends were impossible, or that they were lying/confabulating the whole convo, and you don’t think that’s true. I don’t think that’s the case either. I’m thinking about it as more piece-wise behavior. One will sincerely care about justice, but in that moment where they read Jess’s post, ignore the giant disclaimer about scapegoating, and try to scapegoat MIRI/CFAR/Leverage, in that particular moment the cognitive processes generating their actions aren’t aligned with justice, and are working against it. Almost like an “anti-justice traumatic flashback”, but most of the time it’s much more low-key and less intense than what you will read about in the literature on flashbacks. Malcolm Ocean does a great job of describing this sort of “falling into a dream” in his post Dream Mashups (his post is not about scapegoating, it’s about ending up running a cognitive algo that hurts you without noticing).
To be clear, I’m not saying such behavior is contemptible, blameworthy, bad, or to-be-scapegoated. I am saying it’s very damaging, and I want more people to understand how it works. I want to understand how it works more. I would love to not get sucked into as many anti-justice dreams where I actively work against creating the sort of world I want to live in.
So when I said “not aligned with justice in any important relevant way”, that was more a statement about “how often and when will people fall into these dreams?” Sorta like the concept of a “fair weather friend”, my current hunch is that people fall into scapegoating behavior exactly when it would be most helpful for them not to. And reading a post about “here’s some problems I see in this institution that is at the core of our community” is exactly when it is most important for one’s general atemporal commitment to justice to be present in one’s actual thoughts and actions.
This was incredibly enjoyable to read! I think you did a very good job of making it easy to read without dumbing it down. Though I’m not well versed in the core math of this post, I still feel like I managed to get some useful gist from it, and I also don’t feel like I’ve been tricked into thinking I understand more than I do.
I’m still mad how much the outside world seems to appreciate when you’re half-dead inside...
Oof, I haven’t thought directly about that before, but man that can sting.
Part of that seems to be a basic part of “you’re the only one in your own head.” Other people have limited ability to know what I feel like, but can visibly tell whether or not I rage at other people. I get congratulated for not raging at people in tense situations, and it feels like I’m getting praised for the internal thing (ignoring my emotions).
Highlighting the parts that felt important:
I think the frame in which it’s important to evaluate global states using simple metrics is kind of sketchy and leads to people mistakenly thinking that they don’t know what’s good locally.
[...]
Only if you go all the way to the extreme of total central planning do you really need a single totalizing metric, so to some extent proposing such a metric is proposing a totalitarian central planner, or at least a notional one like a god.
[...]
“I don’t know what a single just soul looks like, so let’s figure out what an ENTIRE PERFECTLY JUST CITY looks like, and then assume a soul is just a microcosm of that.”
I can see ways in which my own thinking has fallen into the frame you mention in the first quote. It’s an interesting and subtle transition, going from asking, “What is it best for me to do?” to “What is it best for a human to do?”/”What would it be best for everyone to be doing?”. I notice that I feel very compelled to make this transition when thinking.
Thing I’ve noticed about status/prestige.
When I first started doing parkour I didn’t have any friends in the hobby. I also didn’t watch parkour videos on youtube. I was mostly dicking about on my own, and whenever a non-parkour person would compliment me on my skill it would feel great and I’d revel in it. Later, there was a phase where I still had no friends doing parkour, but I was watching some of the best athletes in the world on youtube. At that time, whenever someone complimented me, it felt like I was cheating, and/or secretly low-status and they just hadn’t gotten the memo.
The past two years, I’ve found a group of friends to train parkour with, and I’m at the top of the skill ladder in that group. Nowadays I notice I don’t feel the same pangs of “I’m cheating”.
This makes me think that what exactly your “local community” is can be finicky. My local community went from no one, to the whole world, to 9 people. What’s interesting is that it’s not as if I forgot that there’s a whole world of professional parkour athletes. It seems like I was able to feel like I had more status because my local community had more “weight” than the rest of the world.
My experience would generate the advice: the more you interact with a larger global group where you are outclassed/low-status, the more you need to interact with a smaller local group where you are high-status.
Another thought on ways it can be unhelpful to cry motte and bailey. I don’t know what’s the split between “people who exhibit motte and bailey behavior and do so as a tactic” vs “those who accidentally exhibit motte and bailey behavior”, but I find myself accidentally internally doing motte and baileys far more often than is comfortable. I’m getting better at noticing when there’s the switch, when my brain starts using a separate model, but it’s a very faint feeling. If someone was actually and accidentally doing a motte and bailey, they might really not notice the switch, and shouting “Aha, motte and bailey!” feels like an easy way to get them on the defensive and not actually thinking about whether they switched models.
(Example, I’ve been reading Consciousness Explained, and I’m onboard with the idea that there’s no Cartesian theater, but I definitely haven’t completely “worked it out of my system” and you could totally catch me doing a motte and bailey on that topic)
I’m not sure I’m thinking about the same thing you are, so let me know what you think of these examples:
“Become a well known writer/blogger”
“Start a popular meetup for Y topic”
“Get respected in a community”
“Make a viral video”
Me phrasing what I think is your point:
Some of the most readily imaginable “things to do” are identified by their effects on social reality (make something popular, be respected). Learning to shape social reality is a skill in itself, but if you mistakenly believe that you are learning how to shape reality, you will hit problems when you are confronted with a problem that actually requires shaping reality.
This chunk felt like the biggest difference between meta-honesty and “tire slash”:
Harry shook his head. “No,” said Harry, “because then if we weren’t enemies, you would still never really be able to trust what I say even assuming me to abide by my code of honesty. You would have to worry that maybe I secretly thought you were an enemy and didn’t tell you.
If I’m following the old rule, you probably want to know in what situations I’d feel good slashing your tires. If I actually felt okay slashing your tires, I’d probably also be invested in making you falsely believe I wouldn’t slash your tires. This makes it hard to soundly, within one’s honesty code, let someone know when you would or wouldn’t be lying to them.
If I’m following meta-honesty, it seems like I can say, “I wouldn’t lie to you about being on your side unless XYZ doomsday scenario”, and that claim is as sound as my claim to be meta-honest. Now, if I say I’m on your side (not going to slash tires / lie), and you trust my claim to be meta-honest, you can believe me with whatever probability you assign to us not currently being in a doomsday scenario.
Your general attitude seems to be taking the problem of coordination too lightly. Eliezer’s recent book has a lot of good thinking on what exactly makes bad equilibria so hard to escape from. Though I’d never discourage you from trying to solve a hard problem, it seems like you’re saying “We can fix coordination problems by just coordinating!”
I actually do like the call-a-week idea. I foresee a lot of problems with “call a random rationalist each week for an hour”, but they seem far more solvable than “fix coordination in general”.
Really what I want is for Kaj’s entire sequence to be made into a book. Barring that, I’ll settle for nominating this post.
I haven’t done a full re-read, but I have re-read certain chapters. It was hella helpful. The experience was often, “Ohhhh, I only got the shadow of the idea on my first pass, it’s grown since then but has been scattered, and the reread let me unify the ideas and feel confident I’m now getting the core idea and its repercussions.”
Thanks for writing this! I’m glad you’ve found a new trajectory, and it looks like you’ve done a decent amount to process and integrate RAISE not having worked out. Best of luck on the next chapter.
I both approve of this problem solving method and realize I don’t know what’s going on in the minds of people you have needed to defend this idea to.
I’d paraphrase your idea as running with the hypothetical “what if these ideas were connected?” A huge amount of my creative leaps come from exploring “what if”s. It feels very simple to keep my “what if” explorations separate from my most rigorous known truths, at least for intellectual topics.
So an actual question that would help me understand more is “what have other people said in conversations where you were defending this idea?”
I’m not sure what writing this comment felt like for you, but from my view it seems like you’ve noticed a lot of the dynamics about scapegoating and info-suppression fields that Ben and Jessica have blogged about in the past (and occasionally pointed out in the course of these comments, though less clearly). I’m going to highlight a few things.
I do think that Jessica writing this post will predictably have reputational externalities that I don’t like and I think are unjustified.
Broadly, I think that onlookers not paying much attention would have concluded from Zoe’s post that Leverage is a cult that should be excluded from polite society, and hearing of both Zoe’s and Jessica’s post, is likely to conclude that Leverage and MIRI are similarly bad cults.
I totally agree with this. I also think that the degree to which an “onlooker not paying much attention” concludes this is the degree to which they are habituated to engaging with discussion of wrongdoing as scapegoating games. This seems to be very common (though incredibly damaging) behavior. Scapegoating works on the associative/impressionistic logic of “looks”, and Jessica’s post certainly makes CFAR/MIRI “look” bad. This post can be used as “material” or “fuel” for scapegoating, regardless of Jessica’s intent in writing it. Though it can’t be used honestly to scapegoat (if there even is such a thing). Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about “HEY, DON’T USE THIS TO SCAPEGOAT”, and has no plausible claim to doing justice, upholding rules, or caring about the truth of the matter in any important relevant sense.
(aside, from both my priors on Jess and my reading of the post it was clear to me that Jess wasn’t trying to scapegoat CFAR/MIRI. It also simply isn’t in Jess’s interests for them to be scapegoated)
Another thought: CFAR/MIRI already “look” crazy to most people who might check them out. UFAI, cryonics, and acausal trade are all things that “look” crazy. And yet we’re all able to talk about them on LW without worrying about “how it looks”, because over many, many conversations, sequences, blog posts, comments, etc., we’ve created a community with different common knowledge about what will result in people ganging up on you.
Something that we as a community don’t talk a lot about is power structures, coercion, emotional abuse, manipulation, etc. We don’t collectively build and share models on their mechanics and structure. As such, I think it’s expected that when “things get real” people abandon commitment to the truth in favor of “oh shit, there’s an actual conflict, I or others could be scapegoated, I am not safe, I need to protect my people from being scapegoated at all cost”.
However, I think that we mostly shouldn’t be in the business of trying to cater to bystanders who are not invested in understanding what is actually going on in detail, and we especially should not compromise the discourse of people who are invested in understanding.
I totally agree, and I think if you explore this sense you already sorta see how commitment to making sure things “look okay” quickly becomes a commitment to suppress information about what happened.
(aside, these are some of Ben’s post that have been most useful to me for understanding some of this stuff)
“People are over sensitive to ostracism because human brains are hardwired to be sensitive to it, because in the ancestral environment it meant death.”
Evopsyche seems mostly overkill for explaining why a particular person is strongly attached to social reality.
People who did not care what their parents or school-teachers thought of them had a very hard time. “Socialization” is the process of the people around you integrating you (often forcefully) into the local social reality. Unless you meet a minimum bar of socialization, it’s very common to be shunted through systems that treat you worse and worse. Awareness of this, and the lasting imprint of coercive methods used to integrate one into social reality, seem like they can explain most of an individual’s resistance to breaking from it.
I really like the “positive reviews should look like X, negative reviews should look like Y” information. I’ve never seen it before, and I expect it to actually be useful when looking for resources.
I’m confused by how “deep” and “surface” are being used in your first picture. From how the “What” and “How” books are described (and from the examples you give), I would have called “What” the deep resource, and “How” the “surface level” resource. How are you thinking of it?