I see that there is a problem, but it seems that both charts support the same conclusion: the longer a problem goes undetected, the more trouble it brings.
Are there any methodological recommendations that are supported by one chart but not the other?
Since Software Engineering is too far from being a science anyway, the correct sign of the correlation seems to be all that matters; the exact numbers can always be fine-tuned later, given the lack of controlled experiments.
Yes, there’s a sort of vague consistency between the charts. That’s precisely where the problem is: people already believe that “the longer a bug is undetected, the harder it is to fix” and therefore they do not look closely at the studies anymore.
In this situation, the studies themselves become suspect: this much carelessness should lead you (or at least, it leads me) to doubt the original data; and in fact the original data appears to be suspect.
Yes, that’s where I end up. :)
Of course, by 1989 both experience and multiple cause-and-effect explanations had already told people this was the case. And the two graphs are actually different data sets supporting the same conclusion, so it looks like people just took whatever graph they found first.
Comparing early quickly-found bugs with late quickly-found bugs is still impossible with data of this quality, but that is for the better. The real problem is not citing the graphs correctly; it is the confounders that affect both bug severity and bug detection time, such as having any semblance of order in the team.
Are there people who claim this is real science rather than a set of best practices? Maybe they are the real problem for now...
Typical quote: “Software engineering is defined as the systematic application of science, mathematics, technology and engineering principles to the analysis, development and maintenance of software systems, with the aim of transforming software development from an ad hoc craft to a repeatable, quantifiable and manageable process.”
And the publications certainly dress up software engineering in the rhetoric of science: the style of citation where you say something and then add “(Grady 1999)” as if that was supposed to be authoritative.
It will be impossible to make progress in this field (and I think this has implications for AI and even FAI) until such confusions are cleared away.
“Early bugs are cheap, late bugs are expensive” suggests that you start with quick and dirty coding and gradually add quality checks (automatic/manual testing, code reviews, up to formal proofs). “Long-undetected bugs are expensive” suggests that it’s best to be disciplined all the time.
Nope, it doesn’t follow from any of the graphs.
Early-found bugs are cheap; early-appearing bugs are dear. Quickly-found bugs are cheap; long-standing bugs are dear.
One claim is:
If you find the problem at moment “50”, you get high mortality if the problem started at moment “10” and low mortality if it started at moment “40”.
The other claim is:
If the problem appeared at moment “10”, you get higher treatment success if it is detected at moment “20” than if it is detected at moment “50”.
So there is a Contraction time C and a Detection time D, and the higher D-C is, the more trouble you get. That much is common sense by now; and of course, depending on which of C and D you hold fixed and which you vary, you get different signs of correlation between D-C and the varied quantity.
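The sign flip can be sketched in a few lines of Python (a toy model only: the linear cost function, the ranges, and the moments “10”/“50” are assumptions taken from the examples above):

```python
# Toy model of the two claims above: C = moment the problem is contracted,
# D = moment it is detected, and "trouble" grows with the undetected time D - C.
# (Purely illustrative; the linear cost function and the ranges are made up.)

def trouble(c, d):
    return d - c  # assume trouble is proportional to undetected time

def corr_sign(pairs):
    """Sign of the correlation between the varied quantity and trouble."""
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    return 1 if cov > 0 else -1

# Claim 1: detection fixed at D = 50, contraction time C varies.
fixed_d = [(c, trouble(c, 50)) for c in range(10, 41)]

# Claim 2: contraction fixed at C = 10, detection time D varies.
fixed_c = [(d, trouble(10, d)) for d in range(20, 51)]

print(corr_sign(fixed_d))  # -1: the later a bug appears, the less trouble
print(corr_sign(fixed_c))  # +1: the later a bug is found, the more trouble
```

Both runs use the exact same cost model, yet the measured correlation flips sign depending on whether C or D is the quantity being varied, which is the whole point.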
Actually, it sounds plausible, but I can assure you that with five minutes of thought you can find at least one excellent alternative explanation for the observed association.
To me these claims are the equivalent of “we use only 10% of our brains”: they have a vague plausibility, which explains why so many people have accepted them uncritically, but they don’t stand up to closer examination. Unfortunately the damage has been done, and you have to do a lot of work to persuade people to let go of mistaken beliefs they have accepted and now think are “scientific” or “proven by research”.
Here I cited one such alternative explanation, without needing even five minutes of thought:
http://lesswrong.com/lw/9sv/diseased_disciplines_the_strange_case_of_the/5tz7
Actually, the same bug found earlier rather than later will probably be cheaper to fix; the question is the size of that saving (and whether it always covers the cost of finding the bug earlier), which is impossible to answer at the level of actual expense we as a society are prepared to spend on studying it.