In my two previous articles, 5 Ways to Boost Software Quality for FREE and 5 More Ways to Boost Software Quality for FREE, I described tool-supported techniques for raising software quality while simultaneously reducing organizational costs.

So, the natural question is "Does it work?" Are there significant improvements to be made from using these no-cost and low-cost tools and techniques? I'm convinced that the answer is "Yes!" In this article, I will provide one example where I was able to quantify the gains.

The techniques that I described were as follows:

#1 - Enhanced Syntax Highlighting

#2 - Code Formatting

#3 - Code Completion

#4 - Code Navigation Markers

#5 - Code Templates

#6 - Continuous Quality

#7 - Continuous Refactoring

#8 - PSP and TSP with Process Dashboard

#9 - Lightweight Formal Methods

#10 - Documentation Tools

#11 - Enhanced Code & Document Generation with Model Driven Architecture

The first five techniques directly reduce the programmer's initial effort and boost software quality as a side effect. The next five require additional up-front effort from the programmer but have an out-sized quality benefit, with the ultimate effect of reducing follow-on effort in debugging, review, and verification. The last one largely supersedes the first five by replacing the code editor with a UML modeling tool.

How effective are these techniques?

It's difficult to say how much any one of these techniques has aided my productivity or quality. Each technique contributes, but I always use as many as I can in concert. In most situations, the environment, the language, or the client limits which tools are available to me. As such, I have had only one opportunity to employ all of these techniques, with two exceptions: I was just starting out with PSP (not yet TSP), and I had not yet adopted #11. I used doxygen as my documentation generator, and for continuous quality I used IAR's compiler for the MSP430, PC-Lint, and Splint. In the interest of full transparency, on this project I also used an advanced static analyzer called PolySpace.

The project was a safety-critical embedded sub-system that performed health monitoring and command processing for a daughter board containing a sensor. It also annunciated alarms and faults, and otherwise communicated with external equipment. The functionality was quite complex. I was hired primarily for the design, and simultaneously to fortify and document the client's software development processes so that the product could be certified at SIL-2 under IEC 61508; I was told that if time allowed, I should continue with the implementation and accomplish what I could before the contract ended. Initially, it was a six-month contract. Progress on the implementation later led to a two-month extension.

On this project, I developed about 15,000 lines of code (LOC) of C over an eight-month period. Roughly a month before I was scheduled to leave, a number of the modules were fully functioning and stable, so I submitted them, about 6,000 LOC, for review. The remaining modules were nearly complete and in the final stages of development.

The code review found only three defects, all within comments. Two were punctuation errors, and one was a missing reference. Because a reviewer questioned why I had done something in a non-obvious way (I had good cause), I assigned myself a fourth documentation defect for not explaining my approach in the comments. The review did not find a single issue with style, function, or logic.

I finished the remaining code a couple of days before my contract ended, so I wasn't there for the final code review, integration, or verification. I learned later that the integration team told the manager how impressed they were with the structure and readability of the code as they debugged communication between it and another sub-system they had built. They found one minor defect in the packet structure that was causing the communication problem.

I don't know whether my successors found that one remaining bug in a day, a week, or a month. But I do know that the project took another three months to finish after my departure; that two other hardware platforms and their respective firmware were part of the system; and that the total project length was 11 months, including design, development, debug, and verification of all of those pieces.

The sub-system I developed was 15,000 LOC, written over eight months. So if I assume the worst case, that the integration team needed the full three months to find that bug and fully verify my code, I can conservatively conclude that my development rate was at least 45 LOC per day.
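For the curious, the worst-case rate works out as follows. This is a minimal sketch, assuming 30-day calendar months (an assumption on my part; the day count isn't stated explicitly):

```python
loc = 15_000           # lines of C delivered
dev_months = 8         # my development time
verify_months = 3      # worst case: the full remainder spent verifying my code
days = (dev_months + verify_months) * 30  # ~30-day calendar months assumed
rate = loc / days
print(round(rate, 1))  # → 45.5 LOC per day
```

Under those assumptions, the fully verified rate comes out just above 45 LOC per day, so 45 is a defensible floor.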

Since I wasn't present at the final code reviews, I can only assume that the defects found in the rest of my code base were proportional to those found in that first review; the same process was followed, and the same care was taken. Since I had four documentation defects in 6,000 lines, I can project roughly 10 documentation defects, plus the one functional defect they found, for a projected total of 11 defects across the entire 15,000 lines. That works out to 0.73 defects per KLOC.
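The projection above is just a proportional scaling; spelling it out, with the proportionality assumption made explicit:

```python
doc_defects_found = 4   # documentation defects from the first review
reviewed_loc = 6_000    # lines covered by that review
total_loc = 15_000      # the whole sub-system

# Assumption: defect density in the unreviewed code matches the reviewed code.
projected_doc = doc_defects_found * total_loc / reviewed_loc  # 10.0
functional = 1          # the packet-structure bug found in integration
total_defects = projected_doc + functional                    # 11.0

print(round(total_defects / (total_loc / 1000), 2))  # → 0.73 defects per KLOC
```

The proportionality assumption is the weak link, of course; if the unreviewed modules were buggier than the reviewed ones, the true density would be higher.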

The commonly cited rule of thumb is that the average developer completes 10 fully debugged lines of code per day. That rate has always seemed low to me. But in his book, "Best Kept Secrets of Peer Code Review", Jason Cohen presents a real-world example in which a tracked 10,000 LOC application required 30 man-months to complete; that equates to 11.1 LOC per day, so the rule of thumb isn't far off. Cohen's book also tells us that a 10,000-line program averages about 32 defects per 1,000 lines of code.

So, if we use the absence of defects as a proxy for quality, my projected 0.73 defects per KLOC is roughly 44X better than the 32 per KLOC average that Cohen reports.

Let me assure you that without my venerable super-powered code editor, I'm no faster at coding than the average programmer; I'm probably a bit slower. On the occasions I've been forced to write code in a simple text editor, such as Notepad, I make plenty of mistakes that simply don't happen when my tools are present. That said, by using the tools and techniques presented here, that 44X quality improvement came alongside a 4X increase in my code production rate. In other words, these techniques, when combined, reduced the cost of the software by roughly 75% while improving quality 44-fold.

Could these results be surpassed in the future? It seems likely. The jury is still out on #11 for lack of hard data. I have a strong intuition that "Enhanced Code & Document Generation with Model Driven Architecture" (MDA) can produce high-quality code at much faster rates than those I recorded here. I have worked on several projects with MDA and MDA-like approaches, but I have not yet had the opportunity to track data for a component or system of substantial size from inception to deployment-ready. I have seen data showing rates approaching 100 LOC per day for a medium-sized telecommunications project using the Shlaer-Mellor method, a predecessor to MDA. With the advances in knowledge and tools since the Shlaer-Mellor method was developed, I have every reason to expect that MDA, with an appropriate selection of the other techniques, can take my code delivery rate to new highs.