I don’t understand why you think the graphs are not measuring a quantifiable metric, nor why it would not be falsifiable. Especially if the ratios are as dramatic as often depicted, I can think of a lot of things that would falsify it.
I also don’t find it difficult to say what they measure: The cost of fixing a bug depending on which stage it was introduced in (one graph) or which stage it was fixed in (other graph). Both things seem pretty straightforward to me, even if “stages” of development can sometimes be a little fuzzy.
I agree with your point that falsifications should have been forthcoming by now, but then again, I don't know that anyone is actually collecting this sort of metric, so anecdotal evidence might be all people have to go on, and we know how unreliable that is.
There probably are things that could falsify it dramatically; apparently, none of them turn out to be the case. I specifically said "falsifiable and wrong": in the parts where this correlation is falsifiable, it is not wrong for the majority of projects.
About the dramatic ratio: you cannot falsify a single data point. It simply happened like this, or so the story goes. So many things would differ in another experiment that the ratio could change (though not reverse) without disproving the general strong correlation...
Actually, we do not even know what the axis labels are. I guess they are fungible enough.
Saying that the cost of fixing is straightforward seems too optimistic. Estimating the true cost of an entire project is not always simple when you are running more than one project at once and some people are involved in both. What exactly do you count as the cost of fixing a bug?
Any metric with "cost" in its name gets requested by some manager from time to time, somewhere in the world. How it is calculated is another question, and that is actually the question that matters.
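To make the ambiguity concrete, here is a minimal sketch (all figures, function names, and cost categories are invented for illustration) showing how two equally defensible definitions of "cost of fixing a bug" give very different numbers for the same fix:

```python
def direct_cost(fix_hours, hourly_rate):
    """Narrow definition: only the hours the fixer logged against the bug."""
    return fix_hours * hourly_rate

def loaded_cost(fix_hours, hourly_rate, review_hours, retest_hours,
                shared_overhead, projects_sharing_overhead):
    """Broad definition: add code review, re-testing, and a slice of
    shared overhead split across concurrent projects (the situation
    where people work on several projects at once)."""
    overhead_share = shared_overhead / projects_sharing_overhead
    labor = (fix_hours + review_hours + retest_hours) * hourly_rate
    return labor + overhead_share

# The same bug, costed two ways (hypothetical numbers):
narrow = direct_cost(4, 100)                    # 4 h * 100 = 400
broad = loaded_cost(4, 100, 2, 3, 900, 3)       # 9 h * 100 + 300 = 1200
print(narrow, broad)                            # 400 1200.0
```

A graph built from the narrow definition and one built from the broad definition could show very different ratios between stages, which is exactly why "how it is calculated" matters more than the label on the axis.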