This is the sequel to the article 5 Ways to Boost Software Quality for FREE. The items described there were primarily concerned with advanced uses of the code editor that can significantly reduce developer effort.
In this article, I will go beyond code editor features, and move into advanced tools and techniques that pack some real punch. I'll continue the list where I left off:
#6 - Continuous Quality
In this era of Continuous Integration / Continuous Deployment (CI/CD), I haven't heard anybody else talking about Continuous Quality (CQ). What is that? I'm referring to the on-the-fly use of metrics tools, linters, and other static analyzers as you develop the code. I generally aim for a clean, zero-warning run of 3 distinct static analyzers at their strictest settings before committing any code. Is this time-consuming? Not really. I have my editor configured to run them all from a simple menu. It is worth realizing that any editor that allows you to configure multiple compilers will accept almost any command-line tool, including a static analyzer, as if it were just another compiler. Which analyzers you use isn't critical, though I would recommend that one of them be the compiler your project is actually using, set to its strictest, most pedantic settings. Others I like (for C and C++) include PC-Lint, Splint (for C), and cppcheck. I also use Source Monitor for on-the-fly checks of code metrics, as well as overall project metrics. Other than PC-Lint, these are all free tools. The inherent quality advantage of this approach is that most errors are eliminated before they ever enter the system.
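As a sketch of how this can be wired up outside of any particular editor, here is a hypothetical Makefile fragment that chains three analyzers into a single pre-commit "quality gate". The target name, source layout, and flag choices are my own assumptions, and it presumes that gcc, cppcheck, and splint are on the PATH:

```make
# Hypothetical quality-gate target; flags are illustrative, not prescriptive.
SRCS := $(wildcard src/*.c)

.PHONY: cq
cq:
	# 1. The project compiler at its most pedantic (diagnostics only).
	gcc -std=c99 -Wall -Wextra -Wpedantic -fsyntax-only $(SRCS)
	# 2. cppcheck with every check enabled; fail on any finding.
	cppcheck --enable=all --error-exitcode=1 $(SRCS)
	# 3. Splint for its deeper annotation-driven checks.
	splint $(SRCS)
```

An editor that supports multiple compiler configurations can invoke `make cq` (or each tool individually) from a menu, exactly as described above.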
#7 - Continuous Refactoring
When you are working in an area where things can be improved, improve them. I'm not talking about "gold plating" or ego-centric changes; I'm talking about real technical debt. If it involves interfaces outside of your area of control, this will require some team coordination and approval from above, but it might still be worth it. Yes, there is a schedule to consider, but if the code is broken, it shouldn't be in the build. If it's too complex, poorly structured, or poorly documented, it's going to add time to the review. Just take the plunge. If you are using the above techniques, then most of the time the extra effort created by leaving a problem in place will far exceed the effort to fix it. There's also the sad truth that once it gets baked into the build, it becomes subject to Newton's first law: it becomes an "object at rest". Continuous Refactoring may well feel like it reduces code velocity, but raising quality as early as possible has productivity and effort-reduction benefits that extend well beyond the initial development.
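As a tiny, invented illustration of the kind of debt worth paying down on the spot, the sketch below shows one common refactoring: replacing nested conditionals with guard clauses. The function names and business rule are hypothetical:

```c
#include <stdbool.h>

/* Before: the actual rule hides at nesting depth three. */
bool can_ship_nested(bool in_stock, bool paid, int qty)
{
    if (in_stock) {
        if (paid) {
            if (qty > 0) {
                return true;
            }
        }
    }
    return false;
}

/* After: each requirement is stated once, at the top, and the
 * function reads like the checklist it really is. */
bool can_ship(bool in_stock, bool paid, int qty)
{
    if (!in_stock) return false;
    if (!paid)     return false;
    return qty > 0;
}
```

The behavior is identical; only the cost of reading (and reviewing) it has dropped, which is exactly the payoff being argued for here.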
#8 - PSP and TSP with Process Dashboard
Personal Software Process (PSP) is a fairly complex statistical approach toward continuous self-improvement in your software development. Process Dashboard, from Tuma Solutions, is a free tool that makes PSP far less complicated and relatively easy to do, and it provides some substantial project management benefits as well. I won't fully describe PSP here; there is an entire book on the topic. The idea behind PSP is that the developer should treat each code change (feature addition or bug-fix) as a micro-project with these phases: Plan (estimate the work), Design, Design Review, Write Code, Review Code, Compile, Test, and Postmortem (Agilists might call this last one a personal retrospective). Time in each phase is measured. The number of bugs found in each phase, and the time spent fixing them, is also tracked. Then, in the postmortem, the developer checks to see how well they did against their estimates. The reviews here are the developer's personal reviews of their own work, and they shouldn't be taken lightly; it takes some discipline to do them well. The focus is on removing any defects before moving to the next phase. The PSP approach provides developers with immediate feedback on how well they did relative to their own estimates, and with statistical tools that allow them to continuously improve. Without appropriate tools, I would consider PSP a nightmare; with Process Dashboard, it is a breeze. Process Dashboard keeps a timer on your desktop and automatically moves you from phase to phase as you check each completion box. Process Dashboard also supports the Team Software Process (TSP) through its Team Dashboard, which provides a team-wide framework and process around PSP (there is a book about that as well). With its team tools, Process Dashboard gives team leaders the ability to define a work-breakdown structure with top-down planning, which is then maintained and updated by individual team members with bottom-up planning.
When team members use the desktop app to track their own work, the team's schedule and remaining work are maintained in real time by the software, which also provides all of the common charts, milestone tracking, and Earned-Value Management numbers. Personal and team workflows can be created, and Process Dashboard can even be used for Agile projects simply by entering each sprint as a milestone to which tasks can be allocated.
If all this sounds like a lot, consider this: organizations that adopt TSP have typically seen productivity improvements ranging from 20% to 150%, schedule estimates consistently within the 4% to 10% range, and a 99% reduction in defects found during system testing. A 2004 report from the Cyber Partnership found that, of all the standardized processes reviewed (and yes, Agile methods were considered), only two were able to demonstrate significant reductions in bugs: both "Correctness-by-Construction" (a formal methods approach) and TSP consistently yielded near-zero defects. This is a quality gain that can't be ignored.
#9 - Lightweight Formal Methods
Formal Methods are techniques for mathematically proving the correctness of programs. They require a lot of advanced training to apply, and most experts consider them infeasible for large systems because of the level of computing resources required. Lightweight Formal Methods provide much of the same benefit but, in some forms, require little advanced knowledge and are easily applied to full systems. In this form, you may know them by other names: Design-by-Contract (DbC), or Code Contracts. Code Contracts provide a way of specifying the behavior of an object or function: essentially, the programmer includes statements about the preconditions, post-conditions, and variable use. The code is then analyzed, and a warning is raised if any of these assertions is violated.
While this might sound like substantial added effort, consider that it really just requires 4 additional statements for each function or method (preconditions, post-conditions, variables used, variables modified), although some tools also support more fine-grained specifications describing internal behavior. The analyzer can use these to find logic issues that would be very difficult to find in a manual review, which means fewer bugs have to wait for testing or deployment to be found.
So, where can you find these DbC tools? The analyzer may be a stand-alone tool, or in some cases it may be part of the compiler itself. Code Contracts are an integral part of the Eiffel programming language, and SPARK Ada makes use of them as well. Microsoft Research developed a Code Contracts library for .NET languages, but it was never formally adopted; until recently it was maintained as an open-source project, and it is still downloadable on GitHub. There are likely other languages for which Code Contract tools exist, but the big surprise is that an analyzer exists for ISO C99. The functionality is available in Secure Programming Lint, aka Splint, but it's not described in the regular documentation. While Splint, developed mainly at the University of Virginia, performs an amazing number of well-documented checks beautifully, it has a secret history: it started its life at MIT as a formal methods tool called LC-Lint, and much of the formal methods functionality was left in Splint as it matured. The LC-Lint documentation can unlock this functionality in Splint for you.
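To make that concrete, here is a small, hypothetical C99 example carrying Splint/LC-Lint-style contract clauses. The function and its claims are my own invention; because the clauses live inside comments, any C compiler accepts the file unchanged, while a run such as `splint +bounds file.c` interprets them as a contract:

```c
#include <string.h>

/* Bounded copy: at most cap characters of src, always NUL-terminated.
 * The clauses below are ordinary comments to the compiler, but Splint
 * checks them as a precondition, a modifies clause, and a postcondition. */
void clamp_copy(char *dst, const char *src, size_t cap)
    /*@requires maxSet(dst) >= cap@*/   /* caller supplies cap+1 bytes */
    /*@modifies *dst@*/                 /* nothing else is written     */
    /*@ensures maxRead(dst) <= cap@*/   /* result fits within cap      */
{
    strncpy(dst, src, cap);
    dst[cap] = '\0';
}
```

If a call site passes a buffer that cannot satisfy the `maxSet(dst) >= cap` precondition, the analyzer flags it at analysis time, before the overflow ever reaches testing.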
#10 - Documentation Tools
JavaDoc for Java and Sandcastle for C# are a couple of examples of documentation tools, and if you are using either, that's a perfectly fine option. I prefer doxygen, a popular free documentation generator, because it works with many languages and supports both JavaDoc- and Sandcastle-style tags. It can produce output in HTML, RTF/Word, and LaTeX, among other formats; the doxygen website itself is generated with doxygen. In HTML form, it allows you to navigate through your code like a website. While it's useful on ordinary code, it is supercharged by the use of tags embedded in your code's comments. Furthermore, when the tags are defined in keyword-activated Code Templates, adding tags to comments that should already exist becomes nearly effortless.
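As a small illustration, here is a hypothetical C function documented with common doxygen tags (`@brief`, `@param`, `@return`); the function itself is invented for the example:

```c
/**
 * @brief Compute the arithmetic mean of an array of readings.
 *
 * @param[in] values  Array of readings; must not be NULL.
 * @param[in] count   Number of elements in @p values; must be > 0.
 * @return The mean of the first @p count readings.
 */
double mean(const double *values, int count)
{
    double sum = 0.0;
    for (int i = 0; i < count; ++i) {
        sum += values[i];
    }
    return sum / count;
}
```

Running doxygen over the source turns this comment into a formatted entry in the generated output, with the parameter descriptions laid out in a table and cross-references to every caller.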
Use of a documentation generation tool can support development efforts by giving team members clarity on how to interface with code outside of their own area. It can accelerate code reviews by revealing relationships between modules, and by presenting design information in a digestible format. This is not the same information that would typically be in a design specification, since it is derived from the code itself. I typically would not use the doxygen output to replace a formal design document, although it can be done quite well: some organizations explicitly include enhanced design information in their source code, and some very good formal documentation can be created from it.
#11 - Bonus: Code & Document Generation with Model Driven Architecture
I am a longtime fan and user of Sparx Enterprise Architect (EA), best known as a UML modeling tool, but far more powerful than that label would suggest. Over many years, I have developed and refined templates for code and document generation. Round-Trip Engineering, which was popularized in the 1990s, has never held much appeal for me. When I build a model, I build it with the intent of generating the code, to be used as-is. If it generates a bug, I fix the model or the template; but in the end, what I generate is what I compile -- and it looks exactly as if it came from my editor, right down to the doxygen tags. The same philosophy holds true for documents. Although there are techniques that allow part of a document to be manually created while other parts come straight from the template, I've generally been able to use the document straight out of the generator. The benefit of doing it this way is that I don't spend time re-doing edits after each generation cycle; my documents are always up to date with the code, and always look polished.

Of course, since UML models Object-Oriented Analysis & Design, the code generation works best for object-oriented programming languages. It is possible to generate code for non-object-oriented languages, but the structure is not likely to mirror anything a developer would write by hand. Still, it is fairly straightforward to create designs that map easily to non-OO languages, so I still prefer creating my documents using EA. The quality benefits of code and document generation aren't immediately obvious, beyond the fact that documentation stays up to date, which is rare in the software world; but some articles suggest that modeled software tends to exhibit defect rates similar to software developed using formal methods. I can attest that my own results have been very good.
My next article, Boost your Software Quality for FREE? Can it be done? will examine results that I have personally seen in employing these techniques.