As a programmer, compared to other programmers, I am extremely uninterested in improving the speed of web apps I work on. I find that (according to my judgement) it rarely has more than a trivial impact on user experience. On the other hand, I am usually way more interested than others are in things like improving code quality.
I wonder if this has to do with me being very philosophically aligned with Bayesianism. Bayesianism preaches updating your beliefs incrementally, whereas the frequentist alternative is a lot more binary. For example, the way scientific experiments work, your p-value either passes the (arbitrary) threshold, or it doesn’t, so you either reject the null, or fail to reject the null, a binary outcome.
Perhaps people are frequently uninterested in subjective things like improving code quality or usability because it is hard to get a sort of “statistically significant” amount of evidence to say something like “this code quality improvement is having this level of impact”, and so people default to “fail to reject the null”. On the other hand, a more Bayesian way of thinking about it is to just make your best judgement and shift your beliefs accordingly.
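As a toy sketch of the contrast (all numbers invented for illustration): instead of demanding a significance threshold before believing a refactor helped, you can just apply Bayes' rule to whatever weak evidence you have and shift your belief by the appropriate amount.

```python
# Toy Bayesian update (numbers are made up, not from any real project):
# did a refactor reduce the bug rate?
prior = 0.5                 # P(refactor helps), before seeing any evidence

# Suppose the week after the refactor had noticeably fewer bugs.
# Assumed likelihoods of seeing that observation:
p_obs_if_helps = 0.6        # fewer bugs is fairly likely if it helps
p_obs_if_not = 0.3          # but could also just be noise

# Bayes' rule: P(helps | obs) = P(obs | helps) * P(helps) / P(obs)
evidence = prior * p_obs_if_helps + (1 - prior) * p_obs_if_not
posterior = prior * p_obs_if_helps / evidence
print(round(posterior, 2))  # → 0.67: belief shifts up, but not to certainty
```

The point is the shape of the outcome: a posterior of 0.67 is neither “reject” nor “fail to reject”, it is simply a stronger belief than before, ready to be updated again by the next observation.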
For things like performance optimization, the results are pretty objective. You can run an analysis and see that, e.g., rendering was sped up by 75 ms, and so you can “reject the null” pretty easily and conclude that there is a real, concrete benefit.
Speed improvements are legible (measurable), although most people are probably not measuring them. Sometimes that’s okay; if the app is visibly faster, I do not need to know the exact number of milliseconds. But sometimes it’s just a good feeling that I “did some optimization”, ignoring the fact that maybe I just improved a routine that is only called once per day from 500 to 470 milliseconds. (Or maybe I didn’t improve it at all, because the compiler was already doing the thing automatically.)
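If you do want to check that the optimization was real rather than a good feeling, a minimal sketch looks like this (`routine` is a stand-in for whatever you think you sped up; in practice the standard library’s `timeit` module does this job more carefully):

```python
import time

def routine():
    # Placeholder for the code under test.
    return sum(i * i for i in range(10_000))

def median_runtime_ms(fn, runs=50):
    """Time fn over several runs and return the median, in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return samples[len(samples) // 2]  # median is robust to outlier runs

ms = median_runtime_ms(routine)
```

Measure before and after the change; if the medians overlap, the 500-to-470 story may not survive, and you have your answer about whether the compiler was already doing it.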
Code quality is… well, from the perspective of a non-programmer (such as a manager), probably an imaginary thing that costs real money. But here, too, there are diminishing returns. Changing spaghetti code to a nice architecture can dramatically reduce future development time. But if a function is thoroughly tested and unlikely to be changed in the future (or likely to be replaced by something else), bringing it to perfection is probably a waste of time. Also, after you have fixed the obvious code smells, you move on to more controversial decisions. (Is it better to use a highly abstract design pattern, or to keep things simple albeit a little repetitive?)
I’d say, if the customer complains, increase the speed; if the programmers complain, refactor the code. (Though there is an obvious bias here: you are the programmer, and in many companies you won’t even meet the customer.)
I’d wager that customers (or users) won’t complain about slow code, especially if there are many customers, for the same reason that most people don’t email corrections for typos in most online posts.
For example, the way scientific experiments work, your p-value either passes the (arbitrary) threshold, or it doesn’t, so you either reject the null, or fail to reject the null, a binary outcome.
Ritualistic hypothesis testing with significance thresholds is mostly used in the social sciences, psychology, and medicine, and not so much in the hard sciences (although arbitrary thresholds like 5 sigma are used in physics to claim the discovery of new elementary particles, they rarely show up in physics papers). Since it requires deliberate effort to get into the mindset of the null ritual, I don’t think that technically and scientifically minded people just start thinking like this.
I think that the simple explanation, that the effect of improving code quality is harder to measure and communicate to management, is sufficient to explain your observations. To get evidence one way or the other, we could also look at what people do when the incentives change. I suspect that in personal projects, few people are more likely to make small performance improvements than to improve code quality.