I chose this particular post to review because I think it does a great job of highlighting some of the biases and implicit assumptions that Zack makes throughout the rest of the sequence. Therefore this review should be considered not just a review of this post, but also of all subsequent posts in Zack’s sequence.
Firstly, I think the argument Zack is making here is reasonable. He’s saying that if a fact is relevant to an argument it should be welcome, and if it’s not relevant to an argument it should not be.
Throughout the rest of the sequence, he continues to justify this basic position with underlying epistemology and math. He makes the case that language is there to help you make predictions, and shows how hard it is to predict people’s views when there are things they aren’t allowed to talk about. He also makes the case that drawing boundaries around natural categories is important to make language useful.
However, I believe that throughout the sequence, the position he’s implicitly arguing against is contextualizing as he defines it in a reply to me here:
“Contextualizers” think that the statement “Green-eyed people commit twice as many murders” creates an implicature that “… therefore green-eyed people should be stereotyped as criminals” that needs to be explicitly canceled with a disclaimer, which is an instance of the more general cognitive process by which most people think that “The washing machine is by the stairs” creates an implicature of “… and the machine works” that, if it’s not true, needs to be explicitly canceled with a disclaimer (“… but it’s broken”). “Decouplers” don’t think the statement about murder rates creates an implicature about stereotyping.
And I believe this is a straw man. In particular, I believe Zack is being quite idealistic, building up a model of how language should be used while ignoring the many ways language is actually used in practice.
If language were indeed only used for clearing up confusions, I think the argument would be quite one-sided, and language should in fact only be used in the ways Zack suggests. However, here’s a list of ways that language is used that don’t fit cleanly into that category:
Convincing and motivating people to take action.
Creating positive or negative affect towards a particular idea or group.
Quoting someone out of context to portray their positions a certain way.
Describing a felt sense, or evoking it in someone else.
In general, the language I use isn’t just changing the predictions that people make: it’s affecting people’s emotions, it can be quoted or used to paint others in a certain light, and it’s being cross-checked against other similar language.
And the rest of the sequence largely ignores this. It paints the argument as a robust case that using language correctly helps you make more effective predictions, versus a mere choice not to hurt others’ feelings.
But the truth is, there’s a whole set of powerful consequentialist arguments for why you might want to consider context: how it will affect the other person’s affect towards what you’re talking about, how it will be quoted, how it could be used to paint you or the groups you’re affiliated with in a certain light, etc.
I don’t believe a sequence with such an implicit straw man should be included in the common knowledge of LessWrong, and I believe this largely for broader contextualizing reasons: the effect that this implicit straw man could have on the culture of LW.
Ironically enough for Zack’s preferred modality, you’re asserting that even though this post is reasonable when decoupled from the rest of the sequence, it’s worrisome when contextualized.
On one hand, I think I probably agree with the overall thrust of your criticism. But I don’t think I endorse it in the context of the review.
Some of the posts in the review are sort of a stand-in for a whole sequence (Moral Mazes, Multiagent Models), but I don’t think Zack’s posts are (or at least I have not interpreted them as such). So I think it makes more sense to look at the particular posts up for review and for each of them ask, “okay, does the writing in this particular post embody a wrong or confusing frame?”
I think the answer is yes in some cases, no in others.
In general: I probably still have some deep disagreements with Zack. (I’m not entirely sure; see below.) But I also think I’ve learned a bunch from watching him follow this train of thought, and been impressed with how thoroughly he investigates it. And I don’t think it makes sense to write a sequence off because the author has a frame you disagree with. I’m evaluating our intellectual progress at the group level, and I think it’s a pretty key tool in the toolkit for individuals to take their assumptions and run with them and see how far they can get.
I think it’s more useful to flag specific places where you think he’s making a mistake on individual posts than to make a vague meta-criticism.
I agree that it’s intellectually fruitful to take assumptions and run with them, but I’m wary of them being enshrined in a book: a static place that can’t be contextualized with critical comments or future work.
I think that if meta-criticisms of the implicit approaches and frames are not allowed, we can end up with issues like those the Integral community ran into: they had a lot of reasonable-sounding and fruitful ideas that nevertheless ended up in quite problematic and unproductive places, because no one was pointing out the subtle places where the whole methodology was incomplete or flawed.
Indeed, a lot of my worry about the particular intellectual direction of LW is informed by looking at what happened with Integral and Wilber.
I’d still make that argument at the level of individual posts rather than the sequence as a whole. (In this particular context, that is. If there were something like an “Algorithms of Deception Sequence Book” being considered as a whole, I’d have a pretty different attitude.)
Want to doublecrux on this?
Hmm. I don’t want to commit to a huge discussion of it. I’m happy to continue doing async LW comments about it. I’m busier than usual this month. There might turn out to be a day when I have a spare hour or two to chat in more detail, but I don’t think I want to spend cognition planning around that.
I think I’ve mostly said my main piece and am fairly happy with “LW members can read what Matt and Ray have said so far and vote accordingly.” If you raise specific points on specific posts, I (and others) might change our votes for those posts.
Yeah, so my thought on this is that it’s often impossible to point at these sorts of missing frames or implicit assumptions in a single post. In my review of Liron’s post I was able to pull out a bunch of quotes pointing to some specific frames, but that’s because it was unusually dense with examples.
In the case of this post, if I were to do the same thing, I think I’d have to pull out quotes from at least 3–4 of the posts in the sequence to point to this underlying straw man (in this case I didn’t actually do that, and just sort of hoped others could do it on their own through reading my review).
That seems true, but I think it still makes sense to concentrate the discussion on particular posts. (Zack specifically disavowed this post and the meta-honesty response, so I think it makes most sense to concentrate on Where To Draw The Boundaries and Heads I Win, Tails Never Heard Of Her)
I think it’s reasonable to bring up “this post seems rooted in a wrong frame” on both of those, linking to other examples. But my own voting algorithm for those posts will personally be asking “does this single post have a high overall mix of ‘true’ and ‘important’?”
I think most posts in the review, even the top posts, have something wrong with them, and in some cases I disagree with the author about which things are wrong-enough-to-warrant-fixing. I do feel that the overall review process isn’t quite solid enough for me to really endorse the Best Of book as a statement of “The LessWrong Community fully endorses this post”, and I think that’s a major problem to be fixed for next year. But meanwhile I think it makes more sense to accept that some posts will have flaws.
Zack specifically disavowed this post and the meta-honesty response, so I think it makes most sense to concentrate on Where To Draw The Boundaries and Heads I Win, Tails Never Heard Of Her
Ahh, I didn’t realize that; I definitely would not have reviewed this post had I realized that was the case.
But my own voting algorithm for those posts will personally be asking “does this single post have a high overall mix of ‘true’ and ‘important’?”
Yeah, I think this is reasonable. I’m worried about things that are wrong in subtle, non-obvious ways due to certain frames or assumptions, because it’s easy for those to sneak in under the radar of someone’s way of thinking, but I think it’s also reasonable not to worry about that.