The September 1752 example sounds like something you’d find on a trivia show. It’s not really such a good example; it’s the exception rather than the rule. Reading it, I feel like I’m back in elementary school being the detail-obsessed nerd.
I can’t say anything about the minute example, but the trend seems to be to take some obscure occurrence, point fingers, say “how can you not know that?”, and look like a special snowflake to every regular person.
In practical terms, what are the merits of all those examples? Going back to the lists, some of them are probably just bad design[1], like the example where a backup is named with a bare string of digits, 053901011991.html, so let’s not focus on those.
[1] What constitutes “bad design” may vary; some people could probably filter through many files like that easily using ls. Some people prefer minimalism; others don’t feel compelled to use their processing power so sparingly and would rather get the job done quickly. (There are time implications here: cleaner code is easier to work with in the future, whereas sometimes you just want a task done so you can forget about it.) So if I were to describe “bad design” in a way that holds some water, I would say it is design that hurts productivity.
The whole point of the list is that there are exceptions to rules that most people consider to be true in all cases.
If you program systems, you get bugs because of corner cases you don’t anticipate. You need domain knowledge to know all the corner cases.
Leap seconds manage to crash real-world computer systems because their designers didn’t handle them properly.
You don’t want any software that has a calendar to crash simply because a user asks it to show September 1752.
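Both of those exceptions are easy to trip over from everyday code. As a quick sketch (the specific calls are my illustration, not anything from the thread), Python’s standard library stumbles on each in its own way:

```python
import datetime

# Leap seconds: a timestamp like 2016-12-31 23:59:60 UTC really occurred,
# but datetime cannot represent a 60th second at all.
try:
    datetime.time(23, 59, 60)
except ValueError as e:
    print(e)  # ValueError: second must be in 0..59

# September 1752: Python uses the proleptic Gregorian calendar, so it
# happily constructs dates that never existed in the British Empire,
# where September 3-13, 1752 were skipped (compare Unix `cal 9 1752`).
print(datetime.date(1752, 9, 10))  # a "valid" date nobody lived through
```

So two systems that each claim to handle dates can disagree about whether a given date even exists, which is exactly the kind of corner case the list is about.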
Actually, the proper solution is to practice defensive programming: don’t trust user input, and be generous with sanity checks. Failing gracefully is much easier when your software knows it’s not in Kansas any more.
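A minimal Python sketch of that style (the function name and its checks are invented for illustration): validate untrusted input at the boundary and fail with a clear error instead of letting garbage propagate deeper.

```python
def parse_port(raw):
    """Defensively parse a TCP port from untrusted input.

    Returns an int in 1..65535, or raises ValueError with a clear
    message rather than passing bad data along.
    """
    if not isinstance(raw, str):
        raise ValueError(f"expected a string, got {type(raw).__name__}")
    raw = raw.strip()
    if not raw.isdigit():
        raise ValueError(f"not a number: {raw!r}")
    port = int(raw)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

parse_port("8080")    # 8080
parse_port(" 443 ")   # 443 -- tolerate harmless whitespace
# parse_port("70000") would raise ValueError: port out of range: 70000
```

The point is that the caller sees one well-labelled failure at the edge, not a confusing crash three layers down.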
Which sounds nice right up until a production system gracefully shuts itself down a few hours before a daylight saving time switch, purely because of checks that turn out to be pickier than the actual thing they’re supposed to be protecting.
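DST transitions are genuinely treacherous: the same wall-clock time can name two different instants. A small sketch in Python (assumes Python 3.9+ and an available tz database; the zone and date are my example):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+, needs the system tz database

tz = ZoneInfo("America/New_York")
# Clocks fall back at 2:00 AM on 2021-11-07, so 1:30 AM occurs twice.
first = datetime(2021, 11, 7, 1, 30, tzinfo=tz)           # fold=0: the EDT reading
second = datetime(2021, 11, 7, 1, 30, fold=1, tzinfo=tz)  # fold=1: the EST reading

# Same wall-clock time, but the two readings are a real hour apart:
print(first.utcoffset() - second.utcoffset())  # 1:00:00
```

A check that compares wall-clock times without accounting for the fold can easily conclude the system is an hour off when it isn’t.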
Multi-byte characters can do surprising things to log-truncating scripts written by someone who didn’t take into account the maximum byte size of characters and Chinese production server names.
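That failure mode is easy to reproduce: truncating UTF-8 at a byte count can cut a character in half. A Python sketch (the server name is made up):

```python
name = "prod-服务器-01"      # mixed ASCII and 3-byte Chinese characters
data = name.encode("utf-8")  # 17 bytes for 11 characters

chopped = data[:7]           # naive byte-level truncation tears "服" apart
try:
    chopped.decode("utf-8")
except UnicodeDecodeError:
    print("truncation split a character mid-sequence")

# Dropping the torn trailing bytes fails soft instead of crashing:
print(data[:7].decode("utf-8", errors="ignore"))  # prod-
```

A script that assumes one byte per character will pick truncation points like that one all the time once non-ASCII names show up.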
A lot of a programmer’s day can end up being spent fixing bugs caused by incorrect assumptions or unhandled edge cases. Knowing lots of edge cases, and handling a reasonable portion of them right away, is far better than making the most restrictive possible assumptions off the bat.
You don’t want to end up running into the Y2Gay problem: http://qntm.org/gay
Sometimes it’s the sanity checks themselves that fail to handle input the system itself could handle fine.
Do you know that “the system” can handle that input fine? If you do, why did your sanity check reject it?
Sanity checks are just code—you certainly can write bad ones. So?
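A concrete (invented) instance of a bad sanity check: a hand-rolled date validator that rejects leap days the underlying system handles fine.

```python
import datetime

DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def looks_like_a_date(year, month, day):
    """A plausible-looking sanity check that hard-codes 28 days for February."""
    return 1 <= month <= 12 and 1 <= day <= DAYS_IN_MONTH[month - 1]

# The check rejects input the rest of the system would handle fine:
print(looks_like_a_date(2024, 2, 29))  # False -- vetoed by the "sanity check"
print(datetime.date(2024, 2, 29))      # 2024-02-29 -- a perfectly real date
```

The validator looks reasonable in review, which is exactly why checks like this survive until the next leap year.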
That post argues via mind-numbingly stupid strawmen (or should that be strawschemas?). Yes, you should try not to be stupid, most of the time, to the best of your ability. I agree :-/