In Defence of Optimizing Routine Tasks

People often quantify whether an optimization to a routine task is worth it by looking at the net amount of time saved. This mindset is exemplified by this oft-cited xkcd:

https://xkcd.com/1205/
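To make that baseline concrete, here is a minimal sketch (in Python, with made-up numbers) of the pure time-saved arithmetic that chart encodes: total time saved over some horizon versus time spent on the optimization. The five-year horizon is the one the comic uses; the specific numbers are only illustrative.

```python
# Minimal sketch of the pure "net time saved" criterion. The five-year horizon
# matches the xkcd chart; all specific numbers below are made up.

def worthwhile_by_time_alone(seconds_saved_per_use: float,
                             uses_per_day: float,
                             optimization_cost_hours: float,
                             horizon_days: float = 5 * 365) -> bool:
    """True iff the optimization repays its cost in raw time saved over the horizon."""
    total_saved_hours = seconds_saved_per_use * uses_per_day * horizon_days / 3600
    return total_saved_hours >= optimization_cost_hours

# A shell alias saving ~0.5 s, used ~20 times a day (~5 hours over five years):
print(worthwhile_by_time_alone(0.5, 20, optimization_cost_hours=0.1))  # True
# ...but by this metric alone, spending a full workday on it would not be "worth it":
print(worthwhile_by_time_alone(0.5, 20, optimization_cost_hours=8))    # False
```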

However, I think this doesn’t really give a full picture of whether making a task more efficient is worth it. There are many other factors that can weigh more heavily than the raw time saved.

First off, friction inside an important loop can disincentivize you from going around the loop at all, and it makes the feedback loop significantly more frustrating. For example, creating short aliases for common shell commands makes me much more productive: it lowers the mental cost I assign to running them and makes the “activation energy” easier to overcome, even though it probably doesn’t save me more than an hour or two in a year, since typing a few extra characters realistically doesn’t take that long. Of course, setting up aliases only takes a few minutes, but even if it took hours I still think it would be worth it overall.

If the task inside the loop takes longer than even just a few seconds, it also adds significant context-switching costs. For example, if I’m writing code that needs some simple utilities, having those utilities already implemented and easily usable is worth far more than the reimplementation time it saves. If I need a utility function that takes 5 minutes to reimplement, by the time I finish writing it I’ve already partially lost the context of the place where I originally needed it, and I have to spend more time and effort getting back into the flow of the code I was writing. (This is also why building up abstractions is so powerful, even more powerful than the code reuse benefits alone would imply: it lets you keep the high-level context in short-term memory.) In practice, this means that building a set of problem-specific tools first to abstract away the annoying details often makes solving the problem feel far more tactile, tractable, and fun than the alternative. This seems obvious in retrospect, but I still sometimes find myself not building tools when I should.
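As a toy illustration of what “building the tools first” can look like (everything here, including the log format and helper names, is hypothetical), a couple of small problem-specific helpers let the actual analysis read at the level you’re thinking at, rather than being interleaved with parsing details:

```python
# Hypothetical problem-specific helpers: abstract the annoying details once,
# so the analysis code stays at the level of the question being asked.
from collections import Counter
from pathlib import Path

def iter_events(log_dir: str):
    """Yield (timestamp, event_name) pairs from every log file in a directory."""
    for path in Path(log_dir).glob("*.log"):
        for line in path.read_text().splitlines():
            ts, _, event = line.partition(" ")   # assumed "timestamp event" line format
            yield ts, event

def event_counts(log_dir: str) -> Counter:
    """Count how often each event occurs across all logs."""
    return Counter(event for _, event in iter_events(log_dir))

# With the helpers in place, the interesting question is one readable line:
# print(event_counts("logs/").most_common(5))
```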

If you’re especially unlucky, the subtask itself could turn out to be harder than just a 5-minute task, and then you have to recurse another layer deeper; after a few iterations of this you end up having totally forgotten what you were originally trying to do: the extremely frustrating “falling down the rabbit hole” or “yak shaving” phenomenon. Needless to say, this is really bad for productivity (and mental health). Having a solid foundation that behaves how you expect it to 99% of the time helps avoid this issue. Even just knowing that the probability of falling down a rabbit hole is low seems to help a lot with the activation energy I need to overcome to actually go implement something.

Even if the thing you’re optimizing isn’t in any critical loop, there’s also the cognitive overhead of having to keep track of it at all. Having fewer things to do frees you from having to worry about whether the thing is getting done, whether you forgot anything important, and so on, which helps reduce stress a lot. Plus, I can’t even count how many times I’ve thought “this is just going to be temporary/one-off and won’t be part of an important feedback loop” about something that eventually ended up in an important feedback loop anyway.

Finally, not all time is created equal. You only get a few productive hours a day, and those hours are worth a lot more than the hours when you can’t get yourself to do anything that requires much cognitive effort or focus. So even if it doesn’t save you any time, finding a way to reduce the amount of cognitive effort something needs could be worth a lot. For example, making the UI of something you need to check often more intuitive, and displaying the things you care about in easily interpretable formats (as opposed to just showing the raw data and forcing you to work things out in your head, even if that computation is trivial), frees up your productive hours for less-automatable things.
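As a concrete (and entirely hypothetical) instance of the same idea: a status line that does the trivial arithmetic and formatting in code, so you never have to do it in your head.

```python
# Hypothetical dashboard line: do the trivial arithmetic/formatting in code so
# the reader never has to do it in their head.
import time

def status_line(free_bytes: int, total_bytes: int, last_backup_epoch: float) -> str:
    free_gib = free_bytes / 2**30
    pct_free = 100 * free_bytes / total_bytes
    hours_since_backup = (time.time() - last_backup_epoch) / 3600
    return (f"disk: {free_gib:.1f} GiB free ({pct_free:.0f}%) | "
            f"last backup: {hours_since_backup:.1f} h ago")

# "disk: 40.0 GiB free (8%) | last backup: 4.0 h ago" -- versus three raw numbers.
print(status_line(free_bytes=40 * 2**30, total_bytes=500 * 2**30,
                  last_backup_epoch=time.time() - 4 * 3600))
```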

Of course, premature optimization is still bad. I’m not saying you should go and optimize everything; it’s just that there are other factors worth considering beyond the raw time you save. Figuring out when to optimize and when not to is its own rabbit hole, but knowing that time isn’t everything is the first step.