Thank you for writing this, John.
It’s critical to pick good directions for research. But fighting about it is not only exhausting, it’s often counterproductive—it can make people tune out “the opposition.”
In this case, you’ve been kind enough about it, and the community here has good enough standards (amazing, I think, relative to the behavior of the average modern hominid) that one of the primary proponents of the approach you’re critiquing started his reply with “thank you”.
This gives me hope that we can work together and solve the several large outstanding problems.
I often think of writing critiques like this, but I don’t have the standing with the community for people to take them seriously. You do.
So I hope this one doesn’t cause you headaches, and thanks for doing it.
Object-level discussion in a separate comment.
I think criticisms from people without much of a reputation are often pretty well-received on LW, e.g. this one.
That’s a good example. LW is amazing that way. My previous field of computational cognitive neuroscience, and its surrounding fields, did not treat challenges with nearly that much grace or truth-seeking.
I’ll quit using that as an excuse to not say what I think is important—but I will try to say it politely.
Writing (good) critiques is, in fact, a way many people gain standing. I’d push back on the part of you that thinks all of your good ideas will be ignored (some of them probably will be, but not all; you won’t know until you try, etc.).
I’m not worried about my ideas being ignored so much as about actively harming group epistemics: making people irritated with my pushback and, by association, irritated with the questions I raise and therefore resistant to thinking about them.
I am pretty sure that motivated reasoning does that, and it’s a huge problem for progress in existing fields. More here: Motivated reasoning/confirmation bias as the most important cognitive bias
LessWrong does seem way less prone to motivated reasoning. I think this is because rationalism demands actually being proud of changing your mind. That value provides resistance, but not immunity. I want to write a post about this.
If you wrote this exact post, it would have been upvoted enough for the Redwood team to see it, and they would have engaged with you much as they engaged with John here (modulo some familiarity, since these people all know each other at least somewhat, and in some pairs quite well).
If you wrote several posts like this of reasonable quality, you would lose the ability to appeal to your own lack of standing as a reason not to write a post.
This is all I’m trying to transmit.
[edit: I see you already made the update I was encouraging, an hour after leaving the above comment to me. Yay!]