Contra Yudkowsky on Epistemic Conduct for Author Criticism

In a comment on the Effective Altruism Forum responding to Omnizoid’s “Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong”, Eliezer Yudkowsky writes:

You will mark that in this comment I first respond to a substantive point and show it to be mistaken before I make any general criticism of the author; which can then be supported by that previously shown, initial, first-thing, object-level point. You will find every post of the Less Wrong sequences written the same way.

As the entire post violates basic rules of epistemic conduct by opening with a series of not-yet-supported personal attacks, I will not be responding to the rest in detail. I’m sad about how anything containing such an egregious violation of basic epistemic conduct got this upvoted, and wonder about sockpuppet accounts or alternatively a downfall of EA. The relevant principle of epistemic good conduct seems to me straightforward: if you’ve got to make personal attacks (and sometimes you do), make them after presenting your object-level points that support those personal attacks. This shouldn’t be a difficult rule to follow, or follow much better than this; and violating it this hugely and explicitly is sufficiently bad news that people should’ve been wary about this post and hesitated to upvote it for that reason alone.

I agree that the dictum to refute an author’s arguments before commenting on their character or authority is good writing advice, which I generally endeavor to follow. However, I argue that Yudkowsky errs in characterizing it as a “basic rule[ ] of epistemic conduct.”

It seems to me that the reason “refutation first, character attacks only afterwards (if at all)” is good writing advice is that it guards against the all-too-human failure mode of previously intellectually fruitful conversations degenerating into ad hominem and name-calling, which are not intellectually fruitful.

When I’m debating someone about some subject of broader interest to the world—for example, stamp collecting—I want to keep the conversation’s focus on the subject of interest. If my interlocutor is untrustworthy, it might be worth arguing that to the audience in order to help them not be misled by my interlocutor’s false claims about the subject of interest. But the relevance of the character claim to the debate needs to be clearly established. The mere truth of the claim “My interlocutor is untrustworthy” is no defense if the claim is off-topic (because argument screens off authority). The audience doesn’t care about either of us. They want to hear about the stamps!

(This is probably not the only reason to avoid personal attacks, but I think it’s the important one.)

However, sometimes the character or authority of an author is the subject of interest. This is clearly the case for Omnizoid’s post. The post is not a derailment of already ongoing discussions of epiphenomenalism, decision theory, and animal consciousness. Rather, the central thesis that Omnizoid is trying to convince readers of is that Yudkowsky is frequently, confidently, egregiously wrong. The aim of the article (as Omnizoid explains in the paragraphs beginning with “The aim of this article [...]”) is to discourage readers from deferring to Yudkowsky as an authority figure.

“Eliezer Yudkowsky is frequently, confidently, egregiously wrong” is a testable claim about the real world. It might be a claim of less broad interest to Society than the questions debated by students of decision theory, animal consciousness, or stamp collecting. (If someone told you that Mortimer Q. Snodgrass is frequently, confidently, egregiously wrong, you would ask, “Who is that? Why should I care?” I don’t know, either.) Nevertheless, it is a claim that someone apparently found worthwhile to write a blog post about, and fair-minded readers should hold that post to the same standards as they would a post on any other testable claim about the real world.

Yudkowsky condemns Omnizoid’s post as “violat[ing] basic rules of epistemic conduct by opening with a series of not-yet-supported personal attacks”, citing an alleged “principle of epistemic good conduct” that personal attacks must be made “after presenting your object-level points that support those personal attacks.” The conduct complaint seems to be not that Omnizoid fails to argue for their thesis, nor that the arguments are bad, but merely that the arguments appear later in the post. Yudkowsky seems pretty unambiguous in his choice of words on this point, writing “not-yet-supported”, rather than “unsupported” or “poorly supported”.

Frankly, this is bizarre. It’s pretty common for authors to put the thesis statement first! If I wrote a blog post that said, “Gummed stamps are better than self-adhesive stamps; this is because licking things is fun”, I doubt Yudkowsky would object and insist that I should have written, “Licking things is fun; therefore, gummed stamps are better than self-adhesive stamps.” But why would the rules be different when the thesis statement happens to be a claim about an author rather than a claim about stamps?

(I could understand why humans might want rules that treat claims about humans differently from claims about other things. So to clarify, when I ask, “Why would the rules be different?”, I’m talking about the real rules—the epistemic rules.)

“You will find every post of the Less Wrong sequences written the same way,” Yudkowsky writes, claiming to have observed his stated principle of good conduct. But this claim is potentially falsified by a November 2007 post by Yudkowsky titled “Beware of Stephen J. Gould”,[1] which opens with

If you’ve read anything Stephen J. Gould has ever said about evolutionary biology, I have some bad news for you. In the field of evolutionary biology at large, Gould’s reputation is mud.

before describing Gould’s alleged errors.

Indeed, “Beware of Stephen J. Gould” would seem to have essentially the same argument structure as “Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong”.

In the former, Yudkowsky argues that readers should distrust Stephen Jay Gould on the grounds that Gould is not only wrong, but misrepresents the consensus of relevant academic experts. (“Gould systematically misrepresented what other scientists thought; he deluded the public as to what evolutionary biologists were thinking.”)

In the latter, Omnizoid argues that readers should distrust Eliezer Yudkowsky on the grounds that Yudkowsky is not only wrong, but misrepresents the consensus of relevant academic experts. (“Eliezer’s own source that he links to to describe how unstrawmanny it is shows that it is a strawman” [...] “Eliezer admits that he has not so much as read the arguments people give” [...] “If you’re reading something by Eliezer and it seems too obvious, on a controversial issue, there’s a decent chance you are being duped.”)

Thus, it’s hard to see how “Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong” could be in “egregious violation of basic epistemic conduct” while “Beware of Stephen J. Gould” is not. If anything, Omnizoid does a better job of showing their work than Yudkowsky (Omnizoid assessing Yudkowsky’s predictive track record unfavorably and detailing three alleged “critical errors”, in contrast to Yudkowsky only showing the rebuttal to Gould’s thesis in Full House and merely quoting evolutionary biologists in support of the claim that such misrepresentations were “a common pattern throughout Gould’s ‘work’”). One might, of course, find Yudkowsky’s arguments contra Gould more compelling on the object level than Omnizoid’s arguments contra Yudkowsky, but that would seem to be irrelevant to the “epistemic conduct” allegation insofar as the conduct complaint is about form rather than correctness.

To be clear, this is not necessarily to accuse Yudkowsky of hypocrisy. “Beware of Stephen J. Gould” was written over fifteen years ago. People can change a lot over fifteen years! It’s plausible that Yudkowsky now regrets that post as failing to live up to his current, improved conception of epistemic conduct. It would certainly be possible to write a similar post that complied with the defer-personal-attacks rule. Instead of “Beware of Stephen J. Gould”, it could be titled “Evolutionary Complexity Is Not a Random Walk” or (to get the criticism-target’s name in the title) “Contra Gould on Evolutionary Complexity”, and rebut the Full House argument first, only remarking on Gould’s untrustworthiness as an afterthought, rather than centering it as the main thesis and putting it in the title.

But would that post be better at communicating what the author really had to say? The Yudkowsky of 2007 wasn’t writing to an audience that already believed Gould’s ideas about the complexity of evolved organisms and needed to be set straight on that technical point. Rather, he specifically wanted to warn his audience not to trust Stephen Jay Gould in general. An alleged “basic rule of epistemic conduct” that prevented him from focusing on his actual concern would be obfuscatory, not clarifying.

Perhaps there’s something to be said for norms that call for obfuscation in certain circumstances. If you don’t trust your psychiatric patients not to harm themselves, take away their pens and shoelaces; if you don’t trust your all-too-human forum participants not to succumb to ad hominem and name-calling, don’t let them talk about people’s motivations at all.

What is less defensible is meta-obfuscation about which norms achieve their function via obfuscation. If Yudkowsky had merely condemned Omnizoid’s post as violating norms of Less Wrong or the Effective Altruism Forum, I would not perceive an interest in replying; the internal politics of someone’s internet fanfiction cult are of little interest to me (except insofar as I am unfortunate or foolish enough to still live here).

But Yudkowsky specifically used the phrases “epistemic conduct” and “epistemic good conduct”. (Three times in one paragraph!) The word epistemic is not just a verbal tic you can throw in front of anything to indicate approval of your internet cult. It means something. (“Of or relating to cognition or knowledge, its scope, or how it is acquired.”)

I think someone who wants to enforce an alleged “basic rule of epistemic conduct” of the form “if you’ve got to [say X] [...] [do so] after presenting your [...] points that support [X]” should be able to explain why such a rule systematically produces maps that reflect the territory when X happens to be “negative assessments of an author’s character or authority” (what are called “personal attacks”), but not for other values of X (given how commonplace it is to make a thesis statement before laying out all the supporting arguments).

I don’t think Yudkowsky can do that, because I don’t think any such rule exists. I think the actual rules of basic epistemic conduct are things like “You can’t make a fixed conclusion become more correct by arguing for it” or “More complicated hypotheses are less probable ex ante and therefore require more evidence to single out for consideration”—and that the actual rules don’t themselves dictate any particular rhetorical style.
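(To make that last rule concrete, here is a standard sketch; the formalism is my illustration, not anything spelled out in Yudkowsky’s comment. If hypotheses are written out in a prefix-free binary description language and each hypothesis $H$ with description length $\ell(H)$ bits is assigned prior probability proportional to $2^{-\ell(H)}$, then the Kraft inequality guarantees $\sum_H 2^{-\ell(H)} \le 1$, so the assignment is coherent, and a hypothesis that takes $k$ extra bits to specify starts out with a prior smaller by a factor of $2^{k}$, needing roughly $k$ bits’ worth of likelihood-ratio evidence just to draw even with its simpler rival. Notice that nothing in that rule says anything about the order in which you are allowed to present your conclusions.)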

Having written this post criticizing Yudkowsky’s claim about what constitutes a basic rule of epistemic conduct, I suppose I might as well note in passing that if you find my arguments convincing, you should to some degree be less inclined to defer to Yudkowsky as an expert on human rationality. But I don’t think that should be the main thing readers take away from this post in the absence of answers to the obvious followup questions: Yudkowsky? Who is that? Why should you care?

I don’t know, either.


  1. I say only “potentially” falsified, because “Beware of Stephen J. Gould” was not included in a later collection of Yudkowsky’s writings from this period and is not part of a “sequence” in the current lesswrong.com software; one could perhaps argue on those grounds that it should not be construed as part of “the Less Wrong sequences” for the purposes of evaluating the claim that “You will find every post of the Less Wrong sequences written the same way.” ↩︎