Do we underuse the genetic heuristic?

Someone, say Anna, has uttered a certain proposition P, say “Betty is stupid”, and we want to evaluate whether it is true or not. We can do this by investigating P directly—i.e. we disregard the fact that Anna has said that Betty is stupid, but look only at what we know about Betty’s behaviour (and possibly, we try to find out more about it). Alternatively, we can do this indirectly, by evaluating Anna’s credibility with respect to P. If we know, for instance, that Anna is in general very reliable, then we are likely to infer that Betty is indeed stupid, but if we know that Anna hates Betty and that she frequently bases her beliefs on emotion, we are not.

Arguments of the latter kind are called ad hominem arguments, or, in Hal Finney’s apt phrase, the genetic heuristic (I’m going to use these terms interchangeably here). They are often criticized, not least within analytical philosophy, where the traditional view is that they are more often than not fallacious. Certainly the genetic heuristic is often applied in fallacious ways, some of which are pointed out in Yudkowsky’s article on the topic. Moreover, it seems reasonable to assume that such fallacies would be much more common if they weren’t so frequently pointed out (accusations of ad hominem fallacies are common in all sorts of debates). No doubt we are biologically disposed to attack the person, on irrelevant grounds, rather than what they are saying.

The genetic heuristic is not always fallacious, though. If a reputable scientist tells us that P is true, where P falls under her domain, then we have reason to believe that P is true. Similarly, if we know that Liza is a compulsive liar, then we have reason to believe that P is false if Liza has said P.

We see that genetic reasoning can be both positive and negative—i.e. it can be used both to confirm and to disconfirm P. It should also be noted that negative genetic arguments typically only make sense if we assume that we generally put trust in what other people say—i.e. that the fact that S has said P makes P more likely to be true. If people didn’t use such arguments, but only looked at P directly to evaluate whether it is true, it is unclear what importance arguments that throw doubt on the reliability of S would have, since in that case, knowing whether S is reliable or not shouldn’t affect our belief in P.
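This point can be made precise with a small Bayesian sketch (the numbers here are purely illustrative, not from any particular source): treat S’s assertion of P as evidence, and update a prior on P using S’s assumed reliability. An unreliable source leaves the prior untouched, so there is nothing for a negative genetic argument to undo.

```python
def posterior(prior, p_assert_if_true, p_assert_if_false):
    """Bayes' rule: probability of P given that S asserts P."""
    num = p_assert_if_true * prior
    return num / (num + p_assert_if_false * (1 - prior))

# A generally reliable speaker: the assertion is strong evidence for P.
print(posterior(0.5, 0.9, 0.1))  # moves the prior of 0.5 up to about 0.9

# A speaker no better than chance: the assertion carries no information,
# so undermining this speaker's reliability changes nothing.
print(posterior(0.5, 0.5, 0.5))  # stays at 0.5
```

The asymmetry is the essay’s point: a negative genetic argument matters only to the extent that we would otherwise have updated toward P on the speaker’s say-so.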

Three kinds of genetic arguments
We can differentiate between three kinds of genetic arguments (this list is not intended to be exhaustive):
1) Caren is unreliable. Hence we disregard anything she says (e.g. since Caren is three years old).

2) David says P, and given what we know about P and about David (especially David’s knowledge of and attitude to P), we have reason to believe that David is not reliable with respect to P. (For instance, P might be some complicated idea in theoretical physics, and we know that David greatly overestimates his knowledge of theoretical physics.)

3) Eric’s beliefs on a certain topic have a certain pattern. Given what we know of Eric’s beliefs and preferences, this pattern is best explained by the hypothesis that he uses some non-rational heuristic (e.g. wishful thinking). Hence we infer that Eric’s beliefs on this topic are not justified. (E.g. Eric is asked to rank different people with respect to friendliness, beauty and intelligence. Eric ranks people very similarly on all these criteria—a striking pattern that is best explained, given what we now know of human psychology, by the halo effect.)

(Possibly 3) could be reduced to 2), but the prototypical instances of these categories are sufficiently different to justify listing them separately.)
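The suspicious pattern in Eric’s case, type 3), can even be quantified: if one person’s ratings of others on logically independent traits correlate very strongly, that is evidence of a halo effect. A toy sketch (the ratings and the 0.9 threshold are made up for illustration):

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Eric's 1-10 ratings of five acquaintances on three unrelated traits.
friendliness = [9, 3, 7, 2, 8]
beauty       = [8, 2, 7, 3, 9]
intelligence = [9, 2, 6, 3, 8]

r_fb = pearson(friendliness, beauty)
r_fi = pearson(friendliness, intelligence)

# Near-identical orderings on independent traits suggest a halo effect.
print(r_fb > 0.9 and r_fi > 0.9)  # True for these made-up ratings
```

Real traits are of course not perfectly independent, so a genetic argument of this kind needs a baseline for how much correlation is expected anyway; the point is only that the pattern is measurable rather than a matter of impression.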

Now I would like to put forward the hypothesis that we underuse the genetic heuristic, possibly to quite a great degree. I’m not completely sure of this, though, which is part of the reason I’m writing this post: I’m curious to see what you think. In any case, here is how I’m thinking.

Direct arguments for the genetic heuristic

My first three arguments are direct arguments purporting to show that genetic arguments are extremely useful.

a) The differences in reliability between different people are vast (as I discuss here; Kaj Sotala gave some interesting data which backed up my speculations). Not only are the differences between, e.g., Steven Pinker and uneducated people vast, but so, more interestingly, are the differences between Steven Pinker and an average academic. If this is true, it makes sense to think that P is more probable conditional on Pinker having said it than on some average academic in his field having said it. But also, and more importantly, it makes sense to read whatever Pinker has written. The main difference between Pinker and the average academic does not concern the probability that what they say is true, but the strikingness of what they are saying. Smart academics say interesting things, and hence it makes sense to read whatever they write, whereas not-so-smart academics generally say dull things. If this is true, then it definitely makes sense to keep good track of who is reliable and interesting (within a certain area or all in all), and who is not.
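The reliability-versus-strikingness point can be illustrated with made-up numbers: if the value of reading someone is, roughly, the probability that their claims are true times how striking those claims are, then a modest edge in reliability compounds with a large edge in strikingness into a much larger gap in value.

```python
# Illustrative toy model (all numbers invented): the expected value of a
# claim is its probability of being true times its strikingness.
def expected_value(p_true, strikingness):
    return p_true * strikingness

top_academic     = expected_value(0.85, 9.0)  # reliable AND interesting
average_academic = expected_value(0.75, 2.0)  # nearly as reliable, but dull

# The gap in value is far larger than the modest gap in reliability.
print(top_academic / average_academic)  # roughly a fivefold difference
```

This is only a sketch of the argument’s shape, but it shows why tracking who is worth reading can pay off even when truth-probabilities differ only modestly.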

b) Psychologists have during the last decades amassed a lot of knowledge of different psychological mechanisms such as the halo effect, the IKEA effect, the just-world hypothesis, etc. This knowledge was not previously available (even though people did have a hunch about some of them, as pointed out, e.g., by Daniel Kahneman in Thinking, Fast and Slow). This knowledge gives us a formidable tool for hypothesizing that others’ (and, indeed, our own) beliefs are the result of unreliable processes. For instance, there are, I’d say, lots of patterns of beliefs which are suspicious in the same way Eric’s are, and which are also best explained by reference to some non-rational psychological mechanism. (I think a lot of the posts on this site could be seen in these terms—as genetic arguments against certain beliefs or patterns of beliefs, based on our knowledge of different psychological mechanisms. I haven’t seen anyone phrase this in terms of the genetic heuristic, though.)

c) As mentioned in the first paragraph, those who only use direct arguments about P disregard some information—i.e. the information that Anna has uttered P. It’s a general principle in the philosophy of science and Bayesian reasoning that you should use all the available evidence and not disregard anything unless you have special reasons for doing so. Of course, there might be such reasons, but the burden of proof seems to be on those arguing that we should disregard it.

Genetic arguments for the genetic heuristic

My next arguments are genetic arguments (well, I should use genetic arguments when arguing for the usefulness of genetic arguments, shouldn’t I?) intended to show why we fail to see how useful they are. Now, it should be pointed out that I think we do use them on a massive scale—even though that’s too seldom pointed out (and hence it is important to do so). My main point, however, is that we don’t do it enough.

d) There are several psychological mechanisms that block us from seeing the scale of the usefulness of the genetic heuristic. For instance, we have a tendency to “believe everything we read/are told”. Hence it would seem that we do not sufficiently disregard what poor reasoners (whose statements we shouldn’t believe) say. Also, there is, as pointed out in my previous post, the Dunning-Kruger effect: incompetent people massively overestimate their level of competence, while competent people underestimate theirs. This makes levels of competence look more similar than they actually are. Also, it is just generally hard to assess reasoning skills, as frequently pointed out here, and in the absence of reliable knowledge people often go for the simple and egalitarian hypothesis that people are roughly equal (I think the Dunning-Kruger effect is partly due to something like this).

It could be argued that there is at least one other important mechanism that plays in the other direction, namely the fundamental attribution error (i.e. we explain others’ actions by reference to their character rather than to situational factors). This could lead us to explain poor reasoning by lack of capability, even though the true cause is some situational factor such as fatigue. Now, even though you do sometimes see this, my experience is that it is not as common as one would think. It would be interesting to see your take on this.

Of course, people do often classify people who actually are quite reliable and interesting as stupid based on some irrelevant factor, and then use the genetic heuristic to disregard whatever they say. This does not imply that the genetic heuristic is generally useless, though—if you really are good at tracking down reliable and interesting people, it is, to my mind, a wonderful weapon. It does imply that we should be really careful when we classify people. Also, it’s of course true that if you are absolutely useless at picking out strong reasoners, then you’d better not use the genetic heuristic but stick to direct arguments.

e) Many social institutions are set up in a way which hides the extreme differences in capability between different people (this is also pointed out in my previous post). Professors are paid roughly the same, are given roughly the same speech time in seminars, etc., regardless of their competence. This is partly due to the psychological mechanisms that make us believe people are more cognitively equal than they are, but it also reinforces this idea. How could the differences between different academics be so vast, given that they are treated in roughly the same way by society? We are, as always, impressed by what is immediately visible, and have difficulty understanding that huge differences in capability are hidden under the surface.

f) Another reason why these social institutions are set up in this way is egalitarianism: we have a political belief that people should be treated roughly equally, and letting the best professors talk all the time is not compatible with that. This egalitarianism is also, I think, an obstacle to our seeing the vast differences in capability. We engage in wishful thinking to the effect that talent is more equally distributed than it is.

g) There are strong social norms against giving ad hominem arguments to someone’s face. These norms are not entirely unjustified: ad hominem arguments do have a tendency to make debates derail into quarrels. In any case, this makes the genetic heuristic invisible, and, again, people tend to go by what they see and hear, so if they don’t hear any ad hominem arguments, they’ll use them less. I use the genetic heuristic much more often when I think than when I speak, and since I suspect that others do likewise, its visibility matches neither its use nor its usefulness. (More on this below.)

These social norms are also partly due to the history of analytic philosophy. Analytical philosophers were traditionally strongly opposed to ad hominem arguments. This had partly to do with their strong opposition to “psychologism”—a rather vague term which refers to different uses of psychology in philosophy and logic. Genetic arguments typically speculate that this or that belief was due to some non-rational psychological mechanism, and hence it is easy to see how someone who’d like to banish psychology from philosophy (under which argumentation theory was supposed to fall) would be opposed to such arguments.*

h) Unlike direct arguments, genetic arguments can be seen as “embarrassing”, in a sense. Starting to question why others, or I myself, came to have a certain belief is a rather personal business. (This is of course an important reason why people get upset when someone gives an ad hominem argument against them.) Most people don’t want to start questioning whether they believe in this or that simply because it’s in their material interest, for if that turned out to be true, they’d come out as selfish. It seems to me that people who underuse genetic reasoning are generally poor not only at metacognition (thinking about one’s own thinking) on a narrow construal, i.e. at thinking about what biases they suffer from, but also at analyzing their own personalities as a whole. If that speculation is true, it indicates that genetic reasoning has an empathic and emotional component that direct reasoning typically lacks. I think I’ve observed many people who are really smart at direct reasoning, but who completely fail at genetic reasoning (e.g. they treat arguments coming from incompetent people as on a par with those from competent people). These people tend to lack empathy (i.e. they don’t understand other people—or themselves, I would guess).

i) Another important and related reason why we underuse ad hominem arguments is, I think, that we wish to avoid negative emotions, and ad hominem reasoning often does give rise to negative feelings (we think we’re being judgy). This goes especially for the kind of ad hominem reasoning that classifies people as smart or dumb in general. Most people have rather egalitarian views and don’t like thinking those kinds of thoughts. Indeed, when I discuss this idea with people they are visibly uncomfortable with it, even though they admit that there is some truth to it. We often avoid thinking about ideas that we’re not emotionally comfortable with.

j) Another reason is mostly relevant to the third kind of genetic argument and has to do with the fact that many of these patterns might be so complex as to be hard to spot. This is definitely so, but I’m convinced that with training you could become much better at spotting these patterns than most people are today. As stated, ad hominem arguments aren’t held in high regard today, which makes people not so inclined to look for them. In groups where such arguments are seen as important—such as Marxists and Freudians—people come up with intricate ad hominem arguments all the time. True, these are generally invalid, as they postulate psychological mechanisms that simply aren’t there, but there’s no reason to believe that you couldn’t come up with equally complex ad hominem arguments that track real psychological mechanisms.

Pragmatic considerations

It is true, as many have pointed out, that since genetic reasoning is bound to upset, we need to proceed cautiously if we’re going to use it against someone we’re in a discussion with. However, there are many situations where the object of our genetic reasoning doesn’t know that we’re using it, and hence can’t get upset. For instance, I use it all the time when I’m thinking for myself, and this obviously doesn’t upset anyone. Likewise, if I’m discussing someone’s views—say Karl Popper’s—with a friend and I use genetic arguments against Popper’s views, that’s unlikely to upset my friend.

Also, given the ubiquity of wishful thinking, the halo effect, etc., it seems to me that reasonable people shouldn’t get too upset if others hypothesize that they have fallen prey to these biases when the patterns of their beliefs suggest this might be so (as they do in the case of Eric). Indeed, ideally they should anticipate such hypotheses, or objections, by explicitly showing that the patterns that seem to indicate that they have fallen prey to some bias actually do not do that. At the very least, they should acknowledge that these patterns are bound to raise their discussion partners’ suspicion. I think it would be a great step forward if our debating culture changed so that this became standard practice.

In general, it seems to me that we pay too much heed to the arguments given by people who are not actually persuaded by those arguments, but rather have decided what to believe beforehand, and then simply pick whatever arguments support their view (e.g. doctors’ arguments for why doctors should be better paid). It is true that such people might sometimes actually come up with good arguments or evidence for their position, but in general their arguments tend to be poor. I certainly often just tune out when I hear that someone is arguing in this way: I have a limited amount of time, and prioritize listening to people who are genuinely interested in the truth for its own sake.

Another factor that should be considered is that genetic reasoning is, to a certain extent, judgy, elitist and negative. This is not unproblematic: I consider it important to be generally optimistic and positive, not least for your own sake. I’m not really sure what to conclude from this, other than that I think genetic reasoning is an indispensable tool in the rationalist’s toolbox, and that you thus have to use it frequently even if it has an emotional cost attached to it.

In genetic reasoning, you treat what is being said, P, as a “black box”, more or less: you don’t try to analyze P or look at how justified P is directly. Instead, you look at the process by which someone came to believe P. This is obviously especially useful when it’s hard or time-consuming to assess P directly, while comparatively easy to assess the reliability of the process that gave rise to the belief in P. I’d say there are many such situations. To take but one example, consider a certain academic discipline—call it “modernpostism”. We don’t know much about the content of modernpostism, since modernpostists use terminology that is hard to penetrate for outsiders. We know, however, how the bigshots of modernpostism tend to behave and think in other areas. On the basis of this, we have inferred that they’re intellectually dishonest, prone to all sorts of irrational thinking, and simply not very smart. From this, we infer that they probably have no justification for what they’re saying in their professional life either. (More examples of useful ad hominem arguments are very welcome.)

Psychology is constantly uncovering new data relevant to ad hominem reasoning—data not only on cognitive biases but also on thought styles, personality psychology, etc. Indeed, it might even be that brain scanning could be used for these purposes in the future. In principle it should be possible to do a brain scan on the likes of Zizek, Derrida or Foucault, observe that there is not much going on in the relevant areas of the brain, and conclude that what they say is indeed rubbish. That would be a glorious victory of cold science over empty bullshit indeed...

I clearly need to learn to write shorter.

* “Anti-psychologism” is a rather absurd position, to my mind. Even though there have of course been misapplications of psychological knowledge in philosophy, a blanket prohibition of the use of psychological knowledge (knowledge of how people typically do reason) in philosophy (which is, at least in part, the study of how we ought to reason) seems to me quite absurd. For an interesting sociological explanation of why this idea became so widespread, see Martin Kusch’s Psychologism: A Case Study in the Sociology of Philosophical Knowledge (in effect a genetic argument against anti-psychologism...).

Another reason was that analytical philosophers revolted against the rather crude genetic arguments often given by Marxists (“you only say so because you’re bourgeois”) and Freudians (“you only say so because you’re sexually repressed”). Popper’s name especially comes to mind here. The problem with their ad hominem arguments was not so much that they were ad hominem, though, but that they were based on flawed theories of how our mind works. We now know much better—the psychological mechanisms discussed here have been validated in countless experiments—and should make use of that knowledge.

There are also other reasons, such as early analytic philosophy’s much too “individualistic” picture of human knowledge (a picture which I think comes naturally to us for biological reasons, but which also is an important aspect of Enlightenment thought, starting perhaps with Descartes). They simply underestimated the degree to which we rely on trusting other people in modern society (something discussed, e.g. by Hilary Putnam). I will come back to this theme in a later post but will not go into it further now.