Here’s a quick exercise in detecting bias. Can you find the article’s most glaring omission?
What comes to my mind is that the article describes a local optimum: the minimum of software errors given the shuttle program, the large budget, the nature of the industry, and so on (all of which are mentioned), with no consideration of what many people would consider preferable, a maximum of ubiquitous cheap spaceflight. Developments since 1996 seem to highlight this.
My bad, I missed that this went on for several pages. What I had in mind is in fact covered. (But by no means featured as prominently as you’d expect.)
What was it?
The “cost” part of the tradeoff. How much more should you expect to pay for the same functionality?
This is alluded to at the end, but too briefly IMO:

the group’s $35 million per year budget is a trivial slice of the NASA pie, but on a dollars-per-line basis, it makes the group among the nation’s most expensive software organizations.
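As a rough sanity check on “most expensive”, the article’s own figures (a $35 million annual budget for a codebase of roughly 420,000 lines, the size the article reports; both numbers are approximate) work out to:

```python
# Back-of-envelope cost-per-line estimate from the article's figures.
# Both numbers are the article's, not independently verified.

annual_budget = 35_000_000   # dollars per year
lines_of_code = 420_000      # approximate size of the shuttle codebase

dollars_per_line_per_year = annual_budget / lines_of_code
print(f"${dollars_per_line_per_year:.0f} per line per year")  # $83 per line per year
```

Roughly $83 per line, per year, every year the code is maintained, which is exactly the kind of number a cost-focused treatment would have led with.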
A secondary part of the same question is whether some of these precautions might in fact be excessive: is there any activity you could drop and still get the same functionality at the same quality level?
More generally the question is one of credit assignment—an issue not just relevant in “methodology” but in learning theory as well, even up to AI theory: which parts of the “process” are to be held necessary and sufficient for the results, and in fact what ought to count as “process”? (For instance, is it even possible to change the people, holding “process” constant, and still get the same results? If it is possible, what are the relevant characteristics of the individuals?)
The following sentence from the article is obviously a lie, journalistic sensationalism of the same kind that leads to “IBM emulates a cat’s brain” headlines:

The process can be reduced to four simple propositions
It’s never that simple. And that is a key issue with the article overall: it wants to boil down something intrinsically complex to a few simplistic and easily stated conclusions.
Do you have any ideas about what the lurking complexity might be?
One is social structure. 260 people is a large group, and there has been research suggesting that social structure is a more effective predictor of software defects than “technical” metrics.
With a group this size, coordination issues are going to loom large, so the way meetings are planned, organized, and run will play a critical role in the group’s “distributed cognition”. The description of the relationship between “coders” and “verifiers” is tantalizing but almost certainly oversimplified.
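The essence of the coder/verifier split can still be sketched in miniature. Assume nothing about how the group actually organizes it; the point is only that the verifier derives checks from the written spec, never from reading the implementation (the spec here is invented for illustration):

```python
# Hypothetical spec (not NASA's): clamp(x, lo, hi) returns lo if x < lo,
# hi if x > hi, and x otherwise, assuming lo <= hi.

# Coder's side: the implementation.
def clamp(x: float, lo: float, hi: float) -> float:
    return max(lo, min(x, hi))

# Verifier's side: checks written from the spec text alone,
# without looking at the code above.
def verify_clamp(impl) -> None:
    assert impl(5, 0, 10) == 5      # in range: unchanged
    assert impl(-3, 0, 10) == 0     # below range: pinned to lo
    assert impl(42, 0, 10) == 10    # above range: pinned to hi
    assert impl(0, 0, 10) == 0      # boundaries count as in range
    assert impl(10, 0, 10) == 10

verify_clamp(clamp)
print("all spec checks passed")
```

What makes the real arrangement interesting, and what the article skips, is everything this toy omits: how disagreements between the two sides get adjudicated, and who verifies the spec itself.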
Another is design. You don’t get near-zero defects on a half-million-line codebase without strong design principles: modularity, avoidance of data coupling, and so on. The popular press won’t mention that, because readers’ eyes would glaze over, but I’m pretty sure this code isn’t sprinkled all over with global variables. The article says “one-third of the process of writing software happens before anyone writes a line of code”, but then reveals practically nothing about this early part of the process, other than the cliché that it produces a lot of documentation.
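To make “avoidance of data coupling” concrete, here is a minimal sketch, in Python purely for illustration (the shuttle software was written largely in HAL/S), of the difference between state shared through a global and state passed explicitly across a module boundary:

```python
from dataclasses import dataclass

# Coupled style: any function anywhere may read or write this global,
# so the set of code that can affect `altitude` is the whole program.
altitude = 0.0

def ascend_global(delta):
    global altitude
    altitude += delta

# Decoupled style: the state lives in one explicit structure, and the
# only code that can change it is code that is handed the structure.
@dataclass
class FlightState:
    altitude: float = 0.0

def ascend(state: FlightState, delta: float) -> FlightState:
    # Returning a new value keeps the data flow visible to the caller.
    return FlightState(altitude=state.altitude + delta)

state = ascend(FlightState(), 100.0)
print(state.altitude)  # 100.0
```

In the second style a reviewer can enumerate every line that can touch the state, which is exactly the property a verification-heavy process depends on.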
The article makes the dangerous claim that its four broad conclusions generalize widely, asserting that they “illustrate what almost any team-based operation can do to boost its performance to achieve near-perfect results”. The problem is that taking this kind of advice too literally leads directly to “cargo cult” software engineering.
You can easily make developers write thousand-page design specifications, but that in no way guarantees defect-free code. In many cases reliance on written documentation is a direct contributor to poor quality, since a more interactive form of communication offers more opportunities for detecting and correcting errors.