Code Quality and Rule Consequentialism

(See also Taking the outside view on code quality)

Code quality. Such a divisive topic. To overgeneralize[1], managers always want things released soon, which means shipping a quick and dirty version of the code. Engineers, on the other hand, always want to do it the “right way”, which means taking longer before releasing.

And that’s just one example. Here are some others.

  • Engineers want to refactor stuff. Managers say it’s not worth it.

  • Engineers want to take the time to write tests. Managers need new features to be released.

  • Engineers want to upgrade to the new version of the library. Managers say that’ll be too costly.

  • Engineers want to set up a linter. Managers don’t see how that’ll actually help the business.

  • Engineers want to spend time discussing things during code review. Managers feel like that always just gets in the way of meeting deadlines and doesn’t really matter.

And now for the million dollar question: who’s right? Are the engineers right in saying that you should take the time to do these sorts of things? Or are the managers right in saying that you should keep your eye on the prize and deploy features that are actually gonna improve the lives of end users?

Just kidding. We shouldn’t ask who’s right. Instead, let’s think about how we’d even go about answering these sorts of questions in the first place.

Two types of questions

I want to pause here for a second and emphasize the distinction. “Who’s right?” and “How do we go about thinking about who’s right?” are two very different types of questions.

Let’s make it more concrete.

  1. Should we convert those class-based React components into functional components?

  2. How would we go about deciding whether to convert those class-based React components into functional components?

Do you see the difference? (1) is asking what we should do. (2) is asking how to go about deciding what we should do.

It’s the difference between 1) asking what we should eat for dinner and 2) asking how we should go about deciding what to eat for dinner. The answer to (1) might be “pizza”. The answer to (2) might be “we should think about how well different options perform along the dimensions of health, convenience, cost, and taste, and pick the option that performs best”.

In these debates about whether to prioritize code quality, I’ve seen plenty of commentary on the first type of question. But I can’t really think of any commentary on the second type: not at the companies I’ve been at, not in the blogosphere, not even in conversations amongst my friends.

I think that second type of question is crucial though. Well, it is for me. It’s central to my own opinions and it explains why I lean especially hard in the direction of code quality.

Consequentialism

Let’s talk about moral philosophy[2]. In a previous version of this post I was going to talk about the three schools of thought: consequentialism, virtue ethics, and deontology. However, I’m not very good at explaining them, and I think that in the context of a business[3], consequentialism is the predominant perspective. CEOs care about results. Consequences. They’re not running their businesses according to the abstract ideals of virtue or Kant’s categorical imperative. So let’s meet them where they are and assume that consequences are what matter.[4]

There are different types of consequentialism though, and I want to focus[5] on those differences.

I think a good place to start is with act consequentialism. Say your friend Alice approaches you and asks whether you think she looks good in her new dress. You don’t think she looks good in it. And to keep things simple, let’s suppose that your only options are to say “yes” or “no”.

An act consequentialist would think about the consequences of saying yes, the consequences of saying no, and choose the action that leads to the best consequences. Perhaps they think that saying yes would lead to better consequences because it’ll make Alice happier, whereas saying no would just hurt her self-esteem.

On the other hand, rule consequentialism takes a different perspective. I’m being a little colorful here, but I think rule consequentialism says something like this.

Look, I agree that consequences are ultimately what matter. In a perfect world, I’d want to choose the action that leads to the better consequences. I’m with you on that.

I just don’t think that people are very good at figuring out what actions lead to the best consequences. I think people are hopelessly biased. I think people are weak. They rationalize. They do what will be easy.

So in this example of your friend Alice, you will be biased towards what is easy, which is to tell her that she looks good. I don’t trust you to actually perform the calculus and figure out what option leads to the best consequences.

Instead, you could come up with rules ahead of time that lead to good consequences. Like “don’t lie”. Then when you are actually faced with situations in real life, just follow those rules. Such a strategy will lead to better consequences than the strategy of trying to calculate which actions will lead to the best consequences.

This is a little inflexible though. “Don’t lie” might do good most of the time, but it doesn’t always work. To use a classic example, what if a friend is hiding in your house and a murderer who is looking for your friend asks you if they are there? Do you lie? On the one hand it seems pretty clear that lying would produce the better consequences. But on the other hand, didn’t we just talk about how you can’t be trusted to do this calculus and instead should follow the rules that you agreed upon ahead of time?

Rule consequentialists recognized this issue, and it has led to a split. Strong rule consequentialism says that rules can’t be broken, no matter what. So a strong rule consequentialist would[6] in fact follow the “don’t lie” rule and reveal that your friend is hiding.

That is pretty dumb though, and it is where weak rule consequentialism comes in. Weak rule consequentialism says that you can use your judgement about when to follow rules. Rules are there for guidance, but you don’t need to be a slave to them.

But that raises the question: how do you know when to use your judgement? I didn’t really research this, but I’m pretty sure there are all sorts of different forms of weak rule consequentialism that answer it differently.

One form is two-level consequentialism. It kinda divides things into 1) everyday situations where you should follow the rules and 2) extreme situations where you can think about deviating. That doesn’t really speak to me though. Surely we can trust people to use their judgement a little bit more than that, right?

Here’s how I see it. Yes, we are biased. Yes, we lean towards doing what is easy. Yes, it is helpful to have rules set ahead of time to guide us. But… well… “rules” is a bad term. I think “guidelines” is a better one. It is good to have guidelines. It is good to be aware of our biases. But from there, it is all a matter of judgement. Think about your biases. Think about what the guidelines say. Do that act consequentialist calculus. And then, incorporating[7] all of that stuff, make a decision and go with it.

Code quality

Let’s bring this back to code quality now. How does all this philosophy stuff relate to code quality? Well, I think that most discussions about whether it is worth investing in code quality approach the question the way an act consequentialist would.

They ask how much rewriting those class-based React components as functional components would actually help. How much easier is it to read the functional code? How much will it improve velocity? How much time will it take?
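To make that concrete, here is roughly what such a rewrite looks like. This counter component is made up for illustration (it isn’t from any particular codebase), but the shape of the change is typical.

```tsx
import React, { useState } from "react";

// A made-up class-based component: a button that counts its own clicks.
class CounterClass extends React.Component<{ label: string }, { count: number }> {
  state = { count: 0 };

  render() {
    return (
      <button onClick={() => this.setState({ count: this.state.count + 1 })}>
        {this.props.label}: {this.state.count}
      </button>
    );
  }
}

// The same component rewritten as a functional component with hooks.
function CounterFunction({ label }: { label: string }) {
  const [count, setCount] = useState(0);

  return (
    <button onClick={() => setCount(count + 1)}>
      {label}: {count}
    </button>
  );
}
```

The functional version drops the class boilerplate and the `this` bookkeeping, and the act consequentialist questions above are about sizing up how much that actually buys you across a whole codebase.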

These are good questions. I think that we should try to answer them. I think it is worth engaging with the decision at the ground level. However, I don’t think that we should stop there. I think we need to incorporate some rule consequentialism into the mix.

What would that look like? Well, as an example, maybe you have a rule about how much time it is generally worth spending on refactoring. Or, rather, maybe you have various rules of the form “In a codebase of size X, it is wise to spend Y% of the time refactoring.” Maybe one example of this is “In a large codebase, it is wise to spend 30% of the time refactoring.”

Ok. Now think about your act consequentialist calculus. Try to extrapolate a bit. What if you performed that calculus on each individual refactoring decision? Maybe it would add up to you only spending 5% of your time refactoring. But that violates the rule that said 30%.

I’m not saying that automatically means the decision should be to refactor your class-based components to functional components. I am saying that the 30% rule should influence your decision, causing you to lean more towards refactoring. And, more generally, that such rules need to have a seat at the table.
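As a toy sketch of how the two perspectives might interact (this is just my own framing of the idea above, not an established technique, and the numbers are made up), you could even write the comparison down explicitly:

```ts
// Toy illustration: compare the refactoring share implied by case-by-case
// (act consequentialist) estimates against a pre-committed guideline.
function refactoringLean(calculusShare: number, guidelineShare: number): string {
  if (calculusShare >= guidelineShare) {
    return "The case-by-case calculus already meets the guideline; trust it.";
  }
  const gap = Math.round((guidelineShare - calculusShare) * 100);
  return `The calculus lands ${gap} points below the guideline; lean harder toward refactoring.`;
}

console.log(refactoringLean(0.05, 0.3)); // the 5% vs 30% example above
```

The point isn’t to automate the decision; it’s that the guideline enters the calculation at all, rather than being thrown away the moment a case-by-case estimate disagrees with it.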


  1. I suspect the things I say below about engineers vs managers might be triggering. It might be tempting to think something like “This is so uncharitable!” or “Hey, I’m a manager and I’m not that short-sighted. What you’re describing is a straw man.” Two responses: 1) I did say I’m overgeneralizing. 2) In my experience working for six different companies and talking to various friends, these stereotypes are actually pretty accurate. Not 100% accurate or even 95% accurate, but to make up a number, maybe they’re 75% accurate.

  2. I know, I know. Philosophy? This is a post about software engineering. Moral philosophy sounds like a pretty big detour. How is it relevant? Is this guy one of those quacks who philosophizes too much and overthinks everything? All I can say about those objections right now is to bear with me. IMHO, it will be worth it in the end.

  3. Not all code is written in the context of business though. It would be interesting to think about the merits of virtue ethics or deontology for something like open source software or a side project. Or even for a business that makes social good a strong priority alongside profits. Or, alternatively, a business like Basecamp that really prioritizes employee happiness as an end in and of itself.

  4. Confused? Me too. How could something other than consequences actually matter? You could read the arguments made in the Stanford Encyclopedia of Philosophy for virtue ethics and deontology, but for whatever reason they just don’t “compute” with me, so I’m not going to try to explain them.

  5. I’m going to be basing my explanations largely off of this video. I spent a few hours clicking around, reading on different websites and watching other videos. I didn’t have much luck though. Other resources felt too confusing. I’d also like to note that I’m not an expert here. I might have made some mistakes, so take this with a grain of salt.

  6. Well, this is just an instructive example. In practice a real-life strong rule consequentialist would have come up with better, more nuanced rules in the first place. Like “don’t lie if X, Y and Z”.

  7. If you are wise, you will also think about your track record for making such decisions, or similar ones, in the past.