What’s outlined there is rational under a particular set of tradeoffs. If a typical software company implemented that methodology correctly, they would go out of business, because they would take longer than their competitors to produce a more expensive product. Most of the things that software is used for simply don’t need those extra nines of reliability.
As a user of those apps I strongly disagree.
If I add up the time spent waiting for crashed software, filing bug reports, and troubleshooting incompatibilities, I have lost a considerable portion of my lifetime.
Also, if software “just worked,” my company could save millions on the IT support department. And yes, we would be willing to spend some extra money if a software manufacturer could back up these claims.
You can disagree if you want to. If I disagreed, I would feel bound by rationality to explain why the customers are so stupid as to make these same buggy software products among the most commercially successful endeavors in human history.
I endorse explaining things. That said, you make it sound like the existence of a thriving market for cheap low-quality goods is much stronger evidence against the existence of a market for expensive high-quality goods than it seems to me to be.
Hrm? I had taken mwengler to be making a different point: the lack of a market for high-quality software outside life-critical applications suggests that such software is not cost-effective to produce.
Bingo on asr. Engineers and economists do the same thing: optimize. It is as expensive a mistake to put $1 billion more into something than it is worth as to put $1 billion less.
The overwhelming success of markets for software at its current quality is not indicative of a failure of the market, or even of the software. It indicates that the right tradeoff between fixing bugs, adding features, accepting delay, and spending more development money is roughly where it is: higher-quality software may well be possible and simply not make money.
It is tremendously important to realize, in economics, engineering, and probably other fields, that perfection is infinitely expensive and is therefore provably NOT the goal.
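As a rough illustration, here is a toy Python model with entirely made-up numbers: suppose each additional “nine” of reliability triples development cost, while the residual downtime cost it can still eliminate shrinks tenfold. The cost and value figures below are hypothetical, chosen only to show that the net-value optimum lands at some finite quality level rather than at perfection:

```python
# Toy model: each extra nine of uptime multiplies dev cost (assumed 3x per
# nine) while the remaining downtime cost shrinks 10x per nine. Since cost
# grows without bound, the optimum is always a finite reliability level.

BASE_DEV_COST = 100_000          # hypothetical cost of shipping at 90% uptime
COST_MULTIPLIER = 3              # hypothetical: each extra nine triples dev cost
TOTAL_DOWNTIME_COST = 1_000_000  # hypothetical cost to users of 100% downtime

def net_value(nines: int) -> float:
    """Value delivered minus development cost at a given number of nines."""
    dev_cost = BASE_DEV_COST * COST_MULTIPLIER ** (nines - 1)
    residual_downtime_cost = TOTAL_DOWNTIME_COST * 10 ** -nines
    return TOTAL_DOWNTIME_COST - residual_downtime_cost - dev_cost

# With these assumptions, adding nines quickly destroys value:
best = max(range(1, 10), key=net_value)
for n in range(1, 5):
    print(f"{n} nine(s): net value {net_value(n):,.0f}")
print("optimal number of nines:", best)
```

Change the assumed constants (e.g. make downtime costlier, as for life-critical systems) and the optimum shifts upward, which is exactly the tradeoff being argued about above.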
There’s one important caveat here, which I want to call attention to. There are externalities here. Some of the cost of bad software is paid by people out across the network who receive spam, DDoS attacks, etc., that would have been prevented if I had run a more secure system. So it might be that the economically optimal level of software quality is higher than the current market would imply.
That said, I agree the optimal level is probably far short of perfection. It happens regularly that some program on my machine crashes (without affecting the rest of the system). I’m not willing to pay very much to reduce the rate of such events.
This still leaves the possibility that people are underestimating the cost to them of fairly unreliable software. Lowering the threshold to effective action can make a big difference.
Yes, but if your company were actually presented with such reliable software, the answer would be “well obviously we meant software that otherwise does what we want. This stuff doesn’t have half the features we need, and it’s almost completely unusable. We can’t deploy this, or we’ll be getting five calls about usability issues for every call we used to get about crashes and compatibility problems.”
Bottom line: what you trade away with the NASA approach isn’t only money. It’s also development speed. Okay if the application remains unchanged for three decades and the users spend a few years of their lives doing nothing but training, not so good otherwise.
But how would one back up these claims? The difficulty of verification is one reason software markets sometimes resemble lemon markets.