Actually, “personal attacks after object-level arguments” is a pretty good rule of epistemic conduct

Background: this post is a response to a recent post by @Zack_M_Davis, which is itself a response to a comment on another post. I intentionally wrote this post in a way that tries to decontextualize it somewhat from the original comment and its author, but without being at least a bit familiar with all of the context, it’s probably not very intelligible or interesting.

This post started as a comment on Zack’s post, but I am spinning it out into its own post because I think it has broader applicability and because I am interested in hearing thoughts and responses from the readers and upvoters of that post, more so than from its author.

I originally responded to Zack’s post with a comment here, but on further reflection, I want to strengthen and clarify some claims I alluded to in that comment.

I see Zack’s post as making two main claims. I have medium confidence that both of the claims are false, and high confidence that they are not well-supported by the text in the post.

Claim one (paraphrased): “personal attacks (alt. negative character assessments) should only come after making object-level arguments” isn’t actually a {good,useful,true} rule of epistemic conduct.

I see the primary justifications given for this claim in the text as (paraphrased):

  • The person claiming this is a rule should be able to explain why they think such a rule would systematically lead to more accurate beliefs (“maps that reflect the territory”).

  • In fact, no such valid explanation is likely to exist, because following such a rule would not systematically lead to the formation of more accurate beliefs.

The issue with the first justification is that no one has actually claimed that the existence of such a rule is obvious or self-evident. Publicly holding a non-obvious belief does not obligate the holder to publicly justify that belief to the satisfaction of the author.

Perhaps a more charitable interpretation of the author’s words (“I think [the claimant]… should be able to explain why such a rule systematically produces maps that reflect the territory...”) is that the absence of a satisfactory explanation from the claimant is Bayesian evidence that no such explanation exists. But if that’s what the author actually meant, then they should have said so more plainly, and acknowledged that this is often pretty weak evidence.

(Relevant background context: Zack has previously argued at great length that this particular claimant’s failure to adequately respond publicly about another matter is Bayesian evidence about a variety of important inferences related to that claimant’s epistemics.)
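
(To make precise why such evidence is often weak, here is a minimal Bayesian sketch of my own; neither the labels nor the numbers come from either post. Let H be “a valid explanation for the rule exists” and E be “the claimant offers no satisfactory explanation”. By Bayes’ rule in odds form,

$$\frac{P(\lnot H \mid E)}{P(H \mid E)} = \frac{P(E \mid \lnot H)}{P(E \mid H)} \cdot \frac{P(\lnot H)}{P(H)}$$

so the size of the update depends entirely on the likelihood ratio. People routinely fail to articulate justifications for rules they correctly follow, so P(E | H) is not much smaller than P(E | ¬H): with illustrative values P(E | ¬H) ≈ 0.9 and P(E | H) ≈ 0.6, the ratio is only 1.5, a modest shift in odds.)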

The issue with the second justification is that a valid explanation for why this is a good rule very likely does exist; I gave one that I find plausible at the end of my first comment:

If more people read the beginning of an argument than the end, putting the personal attacks at the beginning will predictably lead to more people seeing the attacks than the arguments that support them. Even if such readers are not consciously convinced by attacks without argument, it seems implausible that their beliefs or impressions will not be moved at all.

Another possible explanation (for which Raemon gives the source of inspiration in the comments): one interpretation of the proposed rule is that it is essentially just a restatement of the injunction to avoid the logical fallacy of Bulverism. If that interpretation is accepted, what remains to be shown is that avoiding Bulverism in particular, or logical fallacies in general, is likely to lead to more accurate beliefs in both readers and authors, independent of the truth values of the specific claims being made. This seems plausible: one of the reasons for naming and describing particular logical fallacies in the first place is that avoiding them makes it harder to write (or even think!) untrue things, and easier to write true things.

Note that I am not claiming that either of these explanations is definitely correct, just that they are plausible-to-likely, empirically testable claims for why the proposed rule (in Zack’s own words) “arise[s] from deep object-level principles of normative reasoning” rather than being a guideline due to “mere taste, politeness, or adaptation to local circumstances”.

Things like “readers often don’t read to the end of the article”, and “readers are unconsciously influenced by what they read”, and “writers are unconsciously influenced by the structure of their own writing” are empirical claims about how the human brain works, which could in principle be tested. I do not claim to have such experimental evidence in hand, but I know which way I would bet on the experimental results, if someone were actually running the experiment.

Supposing you accept such hypotheses as likely, I contend that the term “epistemic conduct” accurately describes the purpose of the proposed rule, according to the ordinary and widely understood meaning of those words.

Sidebar: perhaps this rule in particular, or other rules which Zack names as actual rules of epistemic conduct, will not apply to some hypothetical ideal agent, or even to all non-human minds. Maybe soon we will be naming and describing rules of reasoning which apply to LLMs but are inapplicable to humans, e.g. “always write out key facts as a list of individual sentences before writing a conclusion, in order to make your KQV vectors larger and longer, which is known to improve the accuracy of your conclusions”. For now, though, it seems perfectly reasonable, under the ordinary meaning of the words, to call any rule that plausibly applies to reasoning processes in most or all of the actual minds we know about a rule of “epistemic conduct”.

(My guess is that a rule about not opening with personal attacks when you’re making controversial object-level claims will actually apply to LLMs as well as humans, though probably for very different underlying reasons. In my view, that makes it a particularly good candidate for declaring it a rule of epistemic conduct, rather than merely a good rule of conduct among humans.)

The fact that the original post doesn’t seriously consider even the most obvious candidate explanations, let alone reject them as invalid or unlikely on empirical or theoretical grounds, looks to me like a pretty glaring omission. Following my own advice above, I state plainly that I think such an omission is strong evidence that the claim is unsupported by the text and weak evidence that the claim is false, and I make no further claims about what the author “should” do.


Claim two (paraphrased): the Gould post is an example of a violation of the purported rule.

(Zack dutifully and correctly notes that this claim is not actually relevant to the validity of the first claim, nor is it even an accusation of hypocrisy. Despite this, he spends some time and energy on this point, so I will attempt to refute it here and use that refutation as a frame to make my own point about decontextualization.)

The main justification given for this claim is that, sufficiently decontextualized, the arguments are similar in structure to another post which is more clearly a central example of a violation of the purported rule.

While decontextualizing is often a useful and clarifying exercise, it is not a universally valid, truth-preserving operation. In this case, the rule under consideration comes with an implicit context about when and how the rule is meant to be applied.

Zack correctly notes that, for this rule in particular to make sense as a rule of epistemic conduct, it should be applicable independent of at least the truth values of the claims being made, and perhaps some other context, such as local discourse norms. Therefore, decontextualizing from the truth value of the object-level claims being made is a valid step. However, removing other context is not necessarily valid, and indeed in the two cases being compared, the relevant context outside of the truth values of the object-level claims is in fact quite important.

Why, and what context am I referring to? I gave one such explanation in my own comment: essentially, it matters who is being attacked, in front of what audience, and how likely anyone is to feel personally affronted by the negative character assessments. In the Gould post, Gould himself is of course not likely to feel any such affront, nor is anyone in the target audience likely to feel it on Gould’s behalf. The result is that readers are able to dispute the character assessments without distraction, and judging from the comments on that post, this indeed appears to have happened: many people did dispute the character assessments as incorrect, but the discussion was not derailed by accusations about the author’s own motivations, e.g. that he was simply grinding an axe against Gould for unseen personal reasons.

An alternative, perhaps better explanation follows directly from the advice given by Villiam in this comment:

This is hindsight, but next time instead of writing “I think Eliezer is often wrong about X, Y, Z” perhaps you should first write three independent articles “my opinion on X”, “my opinion on Y”, “my opinion on Z”, and then one of two things will happen—if people agree with you on X, Y, Z, then it makes sense to write the article “I think Eliezer is often wrong” and use these three articles as evidence… or if people disagree with you on X, Y, Z, then it doesn’t really make sense to argue to that audience that Eliezer is wrong about that, if they clearly think that he actually is right about X, Y, Z. If you want to win this battle, you must first win the battles about X, Y, Z individually.

(Shortly, don’t argue two controversial things at the same time. Either make the article about X, Y, Z, or about Eliezer’s overconfidence and fallibility. An argument “Eliezer is wrong because he says things you agree with” will not get a lot of support.)

Note that no similar advice need be given to the author of the Gould post, even if the claims about Gould in that post are false! The author gave his views on the object level prior to writing the Gould post, and those views were received mostly positively and uncontroversially.

Again note that this advice applies independently of the truth values of the claims in the posts in question, and is plausibly also independent of any local argument norms—lots of commenters thought the claims about Gould were wrong, to varying degrees, and the norms of 2007 LW were pretty different from the norms of the 2023 EAF.

When the decontextualization operation is applied properly, rather than improperly (by blindly removing all context), it becomes apparent that the proposed rule is simply inapplicable in the case of the Gould post, and was therefore not actually violated. This looks like the rule functioning as intended: the reasoning ability of the author and readers of the Gould post (which is what the rule is meant to protect when it does apply) was not noticeably impaired by the negative character assessments within, nor by their ordering.

(This is also why Zack’s example about gummed stamps falls flat: a post about licking stamps is another context in which the rule is inapplicable, rather than wrong or not useful.)

I anticipate a possible objection to this section: the applicable context for when the proposed rule applies has not been made explicit or legible by me or the original claimant anywhere. This is true, and indeed the fact that no one has provided a clear and explicit statement of exactly which contexts the rule is supposed to apply in, and how, is weak Bayesian evidence that no such crisp statement exists. Feel free to update on that, though consider also that you might learn more by thinking for yourself about which contexts are relevant and why, and seeing whether you can come up with a crisp statement of applicability on your own.


A final remark on Zack’s choice of phrasing, which is not central to the claims above but which I think is key to how those claims were received:

...someone who wants to enforce an alleged “basic rule of epistemic conduct” of the form...

It is implied, but not stated directly in the text, that the “someone” here is Eliezer, and that by writing the comment, he was attempting to “enforce a rule”.

I think this is an unjustified and incorrect insinuation about Eliezer’s internal motivations for leaving the comment. The comment was not necessarily an attempt to enforce a rule at all. I read the comment as an attempt to explain to the upvoters of the Omnizoid post why they erred in upvoting it, and to help them avoid making similar mistakes in the future. In the course of that explanation, the comment stated (but neither explained nor attempted to enforce) a rule of epistemic conduct.

After Eliezer posted the comment in question, the votes on the EAF version of the Omnizoid post swung pretty dramatically, which I take as evidence that my interpretation of the comment’s intended purpose is more likely than Zack’s, and that the comment was successful in that purpose.