Futility Illusions

…or the “it doesn’t make a difference anyway” fallacy.

[Image: an indecisive cow tied to a tiny tree with a thin rope. The cow could easily escape, but appears not to believe in its ability to do so.]

Improving Productivity is Futile

I once had a coaching call on a generic productivity topic along the lines of “I’m not getting as much done as I’d like to”. My hope was that we might identify ways for me to become more productive and get more done. The coach, however, very quickly homed in on figuring out what I typically work on in order to eliminate the least valuable things – certainly also a good idea, but this approach seemed a bit disappointing to me. I had the impression I already had a good selection of high-value things, and really only wanted to do more of them, rather than dropping some in favor of others. When I asked about this, he seemed to have a strong conviction that “getting more done” is futile – you can’t just do that, or if you can, then not sustainably. Instead, you should always focus on doing the right things.

Now, I think there is some wisdom in that. And perhaps it even was a good strategy in my case. However, I still believe there’s a bit of a fallacy involved in his assessment: the assumption that some malleable quantity is somehow unimprovable. That how much I can get done is somehow constant, or that trying to change it is not worth the effort.

It’s what I like to call futility illusions, and I think they’re pretty common.

To name two more examples that I’ve encountered before:

  • Some people seem to think that improving your sleep quality – so that less sleep suffices for the same amount of rest you currently get – is a fool’s errand, and that you can’t just “sleep more effectively”.

  • I’ve had debates about group retention, where others argued that “you can’t just improve retention” because it’s out of your hands. It’s the responsibility[1] of the people who join to decide whether to come back. If they don’t like your group, they won’t return, and you shouldn’t try to “force” anything.

Futility Everywhere?

The recurring theme in all these examples is that someone has a strong belief that some particular quantity is basically fixed and you can’t realistically improve it.

But the assumption that there’s no way to improve upon a given quantity is often a rather bold one, because it implies one of two things:

  1. Immutability: Either, the given property is truly entirely fixed, and cannot be changed at all.

  2. Optimality: Or, the property can be changed, but only negatively, i.e. it is already so close to the optimum (or marginal improvements are so costly) that no further improvements are practical.

Condition 1 seems to be false in almost all cases of interest. Looking at our three examples, we can at least always find obvious ways to make them worse:

  1. It’s easy to reduce productivity, e.g. I could watch YouTube all day instead of doing anything productive.

  2. It’s similarly easy to reduce sleep quality: I could turn my heating to the max, get a worse mattress, leave my lights on, play loud rock music throughout the night, drink an energy drink late in the evening, and release a horde of ants into my bed. I’m sure people would agree that this would reduce my sleep quality – sustainably, even – and is not something I would eventually get used to until I slept as well as I do now.

  3. If we start entirely ignoring newcomers or even insulting them, I’m sure we could decrease the retention rate to practically 0.

So, clearly, none of these metrics are immutable; the quantities can be changed. This leaves the second condition: if they can be changed, and yet you assume they can’t be improved, then they must have some upper bound that we are very close to (or we’re already so deep into diminishing-returns territory that further improvements are not practically achievable). This can make sense in cases where we have already invested a lot of effort into something. But if we haven’t – as is the case to varying degrees in these examples – then it would typically be really surprising if we just ended up close to the optimum by default.

The optimality assumption at least often appears more reasonable than immutability. But typically, when I encounter futility arguments in the wild, the people making them haven’t inquired about optimality beforehand; they seem to just assume it.

Let’s take the group retention example: I have no actual data on this, but I’m sure the retention rate of, say, rationality meetup groups varies a lot. Let’s, for instance, suppose it follows a skewed distribution that for almost all groups ranges from 5% to 50%, with the average around 20% or so (for some sensible operationalization of “retention”).

[Chart: fake data that may or may not help me make a point.]

And maybe the organizers in the lower half of this distribution tend to complain to their peers about their group’s retention rate being frustratingly low. This easily creates the impression that “retention is bad everywhere”, because all people hear from other group organizers are complaints about low retention. But this involves reporting bias: groups with better retention rates usually just don’t talk about it much, as it’s not a problem for them. What’s more, even among those that do complain, the retention rate may still vary by a factor of more than 3! So, in this case, it seems very likely to me that there are ways to improve retention for many such groups – it’s just not immediately obvious how.[2]
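The reporting-bias dynamic is easy to simulate. Here’s a minimal sketch in Python on entirely made-up numbers – a lognormal distribution whose parameters I picked to roughly match the 5%–50% range and ~20% average assumed above:

```python
import random

random.seed(0)

# Invented retention rates for 1,000 hypothetical meetup groups, drawn
# from a skewed (lognormal) distribution and clipped to the 5%-50% range
# assumed in the text. All parameters are made up, purely for illustration.
rates = []
while len(rates) < 1000:
    r = random.lognormvariate(-1.65, 0.55)  # median around 19%
    if 0.05 <= r <= 0.50:
        rates.append(r)

rates.sort()
median = rates[len(rates) // 2]

# Reporting bias: suppose only the lower half ("the complainers") ever
# bring up retention. Even among them, rates span a wide range.
complainers = rates[: len(rates) // 2]
spread = max(complainers) / min(complainers)

print(f"median retention: {median:.0%}")
print(f"range among complainers: {min(complainers):.0%} to {max(complainers):.0%}")
print(f"that's a factor of about {spread:.1f}x")
```

On fake data like this, even the groups you only ever hear complaints from differ in retention by a factor of 3 or so – which is the point: hearing nothing but complaints doesn’t mean the quantity is fixed.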

Futility is Rare

Well, some things may be truly futile (or optimal!) after all, at least given our current state of knowledge and technology, such as:

  • An adult person’s height

  • Improving a person’s reaction time much below 200 ms or so

  • Tic-tac-toe performance

  • Rock-paper-scissors performance

  • (Lossless) data compression

  • Solar panel efficiency

  • LED efficiency

But on the other end, there are many things that may seem futile at first, but upon closer inspection probably don’t fulfill the conditions of immutability or optimality:

  • Education

  • International cooperation

  • Aging and human lifespan

  • The number of colds you get in a year

  • IQ

  • Treatment of mental illnesses

  • Organizational structures

  • Information flow within a group or organization

  • Group decision-making

  • Urban traffic flow

  • Wild animal suffering

Futility is probably usually earned: many things naturally become more futile the more maxed out they get, as with data compression or solar panel efficiency – after decades of work and innovation, we may be closing in on fundamental limits to a degree that further major improvements seem unlikely (or impossible).

How much effort individuals have already invested into their sleep, or productivity, or expected lifespan naturally differs. But if you haven’t put meaningful effort into some malleable quantity, then it’s unlikely you just happen to be close to the optimum by default.

Putting it all Together

Perhaps a reasonable “5-seconds version” of this post is something like:

Whenever you suspect (or somebody claims) that some desirable property cannot be improved further, think briefly about a) whether that property can be changed at all (e.g., can you think of easy ways to make it worse?), and b) if it can be changed, whether there’s really reason to assume it’s already close to its optimum. If it can be changed and is not close to its optimum, then arguing about its futility may be misguided.

There are definitely cases where further improvements are futile or not worth the effort. But before cutting conversations short or dismissing ideas due to assumptions of futility, we should make sure we’re not just falling for a fallacy.

  1. ^

    While we’re at it, I’d like to mention that “responsibility” is a tricky term anyway.

  2. ^

    Yeah, okay, maybe my argument hinges a bit on fake data. But come on, do you really think retention is about equal in every group, and the group’s culture and behavior have no meaningful influence?