When evaluating whether to invest time in making something more efficient, I often see people compare the one-off cost of making the thing more efficient to the expected time saved on future uses. I think there is an important third variable to track, one that often swings such decisions from "not worth the effort" to "definitely worth the effort": the expected increase in usage of the thing due to the reduction in friction of using it. In practice, I often find this final consideration dominates.
Recent examples from my life:
Reducing the number of button presses needed for common workflows on my laptop means I can navigate my laptop more quickly, and I also end up navigating more instead of putting it off because it is annoying.
Moving to a more central location in my city means I save time commuting to things and also end up going to more things.
Automating the loading of context from my personal apps into AIs means I spend less time copy-pasting context into AIs and also end up asking AIs more questions about my personal context.
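The trade-off behind these examples can be sketched as a quick break-even calculation. All the numbers below are hypothetical, just to show how the third variable changes the picture:

```python
# Hypothetical break-even sketch: is a one-off efficiency investment worth it?
# All numbers are made up for illustration.

setup_cost_min = 120        # one-off cost of making the thing more efficient
saved_per_use_min = 2       # minutes saved each time the thing is done
old_uses_per_month = 20     # usage before the friction drops
new_uses_per_month = 50     # usage after (induced by the lower friction)
horizon_months = 12

# Naive comparison: only count time saved at the old usage rate.
naive_savings_min = saved_per_use_min * old_uses_per_month * horizon_months

# Third variable: the extra uses happen only because friction dropped,
# so each one is additional value on top of the raw time savings.
induced_uses = (new_uses_per_month - old_uses_per_month) * horizon_months

print(naive_savings_min)    # 480 minutes saved vs. a 120-minute setup cost
print(induced_uses)         # 360 extra uses of the thing over the horizon
```

Even when the naive time-saved comparison already clears the bar, the induced usage is often the larger effect, as in this sketch.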
I think it was Joel Spolsky (who worked on Excel and Visual Basic at Microsoft) who mentioned a rule of thumb that each 10% reduction in difficulty would roughly double the market of a piece of software. And Google once found that even tenths of a second of extra page load time had a noticeable effect on usage. This seems consistent with your claim.
There’s an opposing force here, too: opportunity cost. If you have 10 hours to automate something that you’ll use for 4 years, is there something else you could do with those 10 hours that would offer an even greater payoff? This is frequently a major factor, even in business contexts. “Yes, it would be profitable, and it would be fun, but it would involve solving 5 hairy problems that only benefit a single big customer. With the same resources, we could solve 5 other hairy problems that benefit 10 customers each.”
Yes, Dan Luu wrote about how he writes a lot because he’s a fast typist.
See also Jevons paradox.