In our argument in the comments to my post on zetetic explanations, I was a bit worried about pushing back too hard socially. I had a vague sense that there was something real and bad going on that your behavior was a legitimate immune response to, and that even though I thought and continue to think that I was a false positive, it seemed pretty bad to contribute to the marginalization of one of the only people visibly upset about some sort of hard-to-put-my-finger-on shoddiness. It’s very important to the success of an epistemic community to have people sensing things like this, and to promote that sort of alarm.
I’ve continued to try to track this, and I can now see somewhat more clearly a really sketchy pattern, which you’re one of the few people to consistently call out when it happens. This comment is a good example. It seems like there’s a tendency to conflate the stated ambitions and actual behavior of ingroups like Rationalists and EAs, when we wouldn’t extend this courtesy to the outgroup, in a way that subtly shades corrective objections as failures to get with the program.
This kind of thing is insidious, and can be done by well-meaning people. While I still think my zetetic explanation post was a different sort of slackness, there was a time when I’d have written posts like Donald Hobson’s, and I wasn’t intentionally trying to fool anyone. I was just … a certain flavor of enthusiastic and hopeful that gets a pass when and only when it flatters the ingroup’s prejudices.
I think it’s helpful and important for you to continue to point out object-level errors like this one, but it’s also important to track which errors seem like part of a pattern of motivated error, and which seem to be mere mistakes. The former class seems much more dangerous to me, since such errors are correlated.
Thank you for the encouragement, and I’m glad you’ve found value in my commentary.
… it’s also important to track which errors seem like part of a pattern of motivated error, and which seem to be mere mistakes. The former class seems much more dangerous to me, since such errors are correlated.
I agree with this as an object-level policy / approach, but not quite for the same reasons as you.
It seems to me that the line between “motivated error” and “mere mistake” is thin, hard to locate, and possibly nonexistent. We humans are very good at self-deception, after all. Operating on the assumption that something can be identified as clearly being a “mere mistake” (or, conversely, as clearly being a “motivated error”) is dangerous.
That said, I think that there is clearly a spectrum, and I do endorse tracking at least roughly in which region of the spectrum any given case lies, because doing so creates some good incentives (e.g., it avoids disincentivizing post-hoc honesty). On the other hand, it also creates some bad incentives, e.g. the incentive for the sort of self-deception described above. Truthfully, I don’t know what the optimal approach is, here. Constant vigilance against any failures in this whole class is, however, warranted in any case.