I don’t suppose I could persuade you to write up a post with what you consider to be some of the most important insights from network theory? I’ve started to think that some of the models we tend to use within the rationality community are overly simplistic.
That’s interesting. I would expect New York to be a large enough city that it should be possible to build up a strong community there.
Thanks for writing this post, this is a worry that I have as well.
I also believe that more could be done to build the global rationality community. I’m certainly keen to see the progress with LW2.0 and the new community section, but if we really want rationality to grow as a movement, we at least need some kind of volunteer organisation responsible for bringing this about. I think the community would be much more likely to grow if there were a group doing things like advising newly started groups, producing materials that groups could use, or creating better material for beginners.
“While this worst-case scenario could apply to any large-scale rationalist project, with regards to AI alignment, if the locus of control for the field falls out of the hands of the rationality community, someone else might notice and decide to pick up that slack. This could be a sufficiently bad outcome rationalists everywhere should pay more attention to decreasing the chances of it happening.”—what would be wrong with this?
This article is very much along the same lines:
“Illusion of Explanatory Depth: Rozenblit and Keil have demonstrated that people tend to be overconfident in how well they understand how everyday objects, such as toilets and combination locks, work; asking people to generate a mechanistic explanation shatters this sense of understanding. The attempt to explain makes the complexity of the causal system more apparent, leading to a reduction in judges’ assessments of their own understanding.
… Across three studies, we found that people have unjustified confidence in their understanding of policies. Attempting to generate a mechanistic explanation undermines this illusion of understanding and leads people to endorse more moderate positions. Mechanistic explanation generation also influences political behavior, making people less likely to donate to relevant advocacy groups. These moderation effects on judgment and decision making do not occur when people are asked to enumerate reasons for their position. We propose that generating mechanistic explanations leads people to endorse more moderate positions by forcing them to confront their ignorance. In contrast, reasons can draw on values, hearsay, and general principles that do not require much knowledge.
… More generally, the present results suggest that political debate might be more productive if partisans first engaged in a substantive and mechanistic discussion of policies before engaging in the more customary discussion of preferences and positions.”
“Why, then, is there systematic bias?… But the rest of the time? It’s because we predict it’s socially helpful to be biased that way.”
Similar thoughts led me to write De-Centering Bias. We have a bias towards focusing on biases (yes, I know the irony). True rationality isn’t just eliminating biases, but also realising that they are often functional.
Hmm, was this really Ezra’s point as opposed to a steelmanned version? My impression was that he insisted on any discussion of the science also involving a discussion of past prejudice. He also seemed to be against giving Murray a platform because of his policy positions.
Hmm, well the article has an example, but it is super long and I’m trying to avoid this becoming political. Any suggestions for examples?
“We unfortunately live in a world where sometimes A implies C, but A & B does not imply C, for some values of A, B, C. So, if you’re talking about A and C, and I bring up B, but you ignore it because that’s “sloppy thinking”, then that’s your problem. There is nothing valid about it.”—What kind of “implies” are you talking about? Surely not logical implications, but rather the connotations of words? If so, I think I know what I need to clarify.
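To spell out why it can’t be logical implication, here’s a minimal sketch (the bird/penguin example is my own illustration, not the other commenter’s): in classical logic, (A → C) ⊢ (A ∧ B) → C, because implication is monotonic: adding premises can never invalidate it. By contrast, P(C | A) can be near 1 while P(C | A ∧ B) is near 0, e.g. A = “is a bird”, B = “is a penguin”, C = “flies”. The quoted phenomenon only arises for this kind of defeasible or probabilistic inference.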
I didn’t comment on what norms should be in wider society, just that low-decoupling spaces are vital. I was going to write this in my previous comment, but I had to run out the door. John Nerst explains “empathy” much more fully in his post.
Any links to where this has already been discussed?
“‘High decoupling’ is something you do because you’re lazy and unwilling to deal with all the couplings of the real world” - I suspect you don’t quite understand what high decoupling is. Have you read Local Validity as a Key to Sanity and Civilisation? High-decoupling conversations allow people to focus on checking the local validity of their arguments.
Actually, I like Decoupling vs. Contextualising more too, especially as each term works as a single word.
This is one of those comments that presents itself as contradicting the post, but actually doesn’t.
Would love to see this attempted, although it seems that, to be worthwhile, the person would most likely have to be co-located with the team. Also, if the organisation later receives funding, the prestige/influence of those taking this role would likely drop, or the role might even become completely obsolete.
I suspect there’s a practice effect here as well. Figuring out how to be assertive without being domineering or bossy is hard. People who have grown up being assertive will have had the opportunity to learn, but those who try to become assertive because they know it’s important for the workplace won’t have developed the judgement yet.
“So, if you believe that all questions are useful, then there is no way I’ll convince you that some hypotheticals are useless”—that’s purely a function of the general difficulty of proving a negative. Why do you expect this to be easy?
“My disagreement is with the abstraction and universality of application, not necessarily with the thesis itself”—I don’t quite follow this issue. I don’t claim that a non-direct application always exists, just that one often does. And trying to figure out when one does or doesn’t is comparable to trying to figure out whether a random bit of maths has any real-world applications. You could try checking a bunch of possibilities, but there could always be one that you didn’t think of.
“Those hypotheticals which I choose not to engage without lots of specificity are the ones where I think the details matter, and the unreal-ness is making assumptions about those details or asserting that they don’t matter.”—I have no issue with someone pointing out that the analysis of a hypothetical shouldn’t be directly applied. However, many people seem to insist that a hypothetical include factor X, even when factor X would massively complicate the model and distract from the purpose of the exercise.
“What if this modeling explains 99% of moral choices, and when you remove it you’re left with nothing but noise?”—Even if it only applies to 1% of situations, it shouldn’t be rounded off to zero. After all, there’s a decent chance you’ll encounter at least one of these situations within your lifetime. But more importantly, this is addressed by the section on Practice Exercises Don’t Need to Be Real.
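To put a rough number on that (the figures here are illustrative assumptions, not from the post): if 100 morally relevant situations come up over a lifetime and each independently has a 1% chance of falling outside the model, then the chance of hitting at least one is 1 − 0.99^100 ≈ 63%, which is hardly something to round off to zero.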
“Or, what if this modeling is hard-coded into my brain, and is literally impossible to turn off?”—I view this similarly to showing someone a really complicated maths proof and them saying, “Given my brain, it’s literally impossible for me to understand a proof this complicated”. In that case, you’ll just have to trust other people. However, if the experts disagree, as they do in philosophy, then I suppose you’ll just have to figure out which experts to trust. That said, I’m skeptical that this is the kind of thing that is hard-coded into anyone’s brain.
“I’m trying to show that even the simplest and most innocent-looking unrealistic problems could be hiding faulty assumptions.”—The floating abstract model doesn’t contain these assumptions. You’ve made the assumption that the model is supposed to be directly applied, which is unwarranted.