Hedge drift and advanced motte-and-bailey

Motte and bailey is a technique by which one protects an interesting but hard-to-defend view by making it resemble a less interesting but more defensible position. Whenever the more interesting position (the bailey) is attacked, one retreats to the more defensible one (the motte); when the attackers are gone, one expands again to the bailey.

In that case, one and the same person switches between two interpretations of the original claim. Here, I instead want to focus on situations where different people make different interpretations of the original claim. The originator of the claim adds a number of caveats and hedges, which makes it more defensible, but less striking and sometimes also less interesting.* When others refer to the same claim, however, the caveats and hedges gradually disappear, making it more and more bailey-like.

A salient example of this is that scientific claims (particularly in messy fields like psychology and economics) often come with a number of caveats and hedges, which tend to get lost when the claims are re-told. This is especially so when the media write about these claims, but even other scientists often fail to properly transmit all the hedges and caveats the claims come with.

Since this happens over and over again, people probably do expect their hedges to drift to some extent. Indeed, it would not surprise me if some people actually want hedge drift to occur. Such a strategy effectively amounts to a more effective, because less observable, version of the motte-and-bailey strategy. Rather than switching back and forth between the motte and the bailey (something which is at least moderately observable, and which also usually relies on some amount of vagueness, which is undesirable), you let others spread the bailey version of your claim whilst you sit safe in the motte. This way, you get what you want (the spread of the bailey version) in a much safer way.

Even when people don’t use this strategy intentionally, one could argue that they should expect hedge drift, and that failing to take action against it is, if not outright intellectually dishonest, then at least approaching that. This argument rests on the consequentialist notion that if you have strong reasons to believe that some negative event will occur, and you could prevent it from happening by fairly simple means, then you have an obligation to do so. I certainly do think that scientists should do more to prevent their views from being garbled via hedge drift.

Another way of expressing all this is by saying that when including hedging or caveats, scientists often seem to seek plausible deniability (“I included these hedges; it’s not my fault if they were misinterpreted”). They don’t actually try to prevent their claims from being misunderstood.

What concrete steps could one then take to prevent hedge drift? Here are some suggestions; I am sure there are many more.

  1. Many authors use eye-catching, hedge-free titles and/or abstracts, and only include hedges in the paper itself. This is a recipe for hedge drift and should be avoided.

  2. Make abundantly clear, preferably in the abstract, just how dependent the conclusions are on key caveats and assumptions. Say this not in a way that merely enables you to claim plausible deniability in case someone misinterprets you, but in a way that actually reduces the risk of hedge drift as much as possible.

  3. Explicitly caution against hedge drift, using that term or a similar one, in the abstract of the paper.

* Edited 25 2016. By hedges and caveats I mean terms like “somewhat” (“x reduces y somewhat”), “slightly”, etc., as well as modelling assumptions without which the conclusions don’t follow and qualifications regarding domains in which the thesis doesn’t hold.