PM Notebook
Our Lessons Learned in Implementing Document and Code Inspections
Bill Hoberecht

Recognizing that the quality of our most recent product release was a problem, we knew that the next release had to be much better.  We chose to implement a handful of improvements that dramatically changed the organization and gave us stellar results.  Our poor product quality was directly addressed with an organization-wide implementation of peer reviews.  Introducing peer reviews and using them broadly in the project contributed significantly to the dramatic increase in the measured quality of our product.

 

Our Journey to Implementing Peer Reviews – Well Worth It

The pressure to finish the project on schedule was intense.  For months it felt as though we were all working non-stop 24x7.  In the end we did achieve our ‘code complete’ milestone as scheduled, but a short time later the system test reports confirmed what we all knew:  our software was somewhat buggy – indeed, our backlog of trouble reports was so large that we could hardly make a dent in the list even by addressing only the most severe problems prior to the release date.  And so began our nightmare of supporting a few hundred thousand lines of defect-ridden software.

But that’s only the prologue to our story.  Recognizing the difficulty of our situation and needing to avoid a repeat performance, we implemented a small set of changes that transformed the project team and enabled successful deliveries over the coming 14 years!  One key change was a project-wide implementation of peer reviews; starting from a well-defined ‘Fagan’ code inspection process, we made adaptations that fit our project culture and moved us to the result we needed: substantial quality improvements.

This narrative shares our experience and some of the lessons we learned before, during and after the initial implementation of peer reviews.

  

Our Key ‘Lessons Learned’

Our experience implementing and consistently using inspections was uniformly positive.  Here is my take on the key points of our implementation and the lessons we learned as we began to benefit from inspections (more information on each of these points follows in the article text):

  1. Simply expecting team members to each ‘do high quality work’ will not work.
  2. Make an unmistakably clear project-wide decision to implement peer reviews.
  3. Train the entire team as a group on the chosen peer review methods.
  4. Adapt and adjust peer review methods to get you the results that you need.
  5. With peer reviews firmly in the organization’s culture, sustained benefits will surely result.
  6. Use metrics and measurements to give everyone visibility of product quality.

 

Can an Individual be Solely Responsible for the Quality of Their Work?

Our project had brought together over 100 very well-qualified software engineers, project managers and testers.  We had a sound architecture, good designs, a skeleton schedule and many other essentials.  We, however, were operating in a project environment with only one or two well-understood management processes and essentially no quality methods.  Quality was the responsibility of each individual.  Learn on your own. Do your best.  Hope for a good result. 

The superlative qualifications of each project member went a long way toward getting designs created and code written, but they weren’t enough to ensure the quality of the resulting system.  Each software engineer was faced with a decision on how to finalize their work products (e.g., a design, software) – most times the individual would just report to their manager that they were done and would move on to the next work task.  Sometimes an attempt would be made to get peer feedback via email or a group discussion, but this mostly resulted in disappointment because colleagues couldn’t devote the time needed to give adequate consideration to the work product.  Sometimes a manager’s approval would be requested, but this futile effort put too much confidence in a manager’s ability to provide credible feedback on technical work products. 

Learned Lesson #1:

Simply expecting team members to each ‘do high quality work’ will not work.

Our system testing provided all the evidence that was needed to conclude that very smart people can still produce poor quality software.  The lack of any well-understood collaborative method for ensuring deliverable quality was a major shortcoming.

  

Our Choice to Implement Inspections

As our defect-ridden system was being deployed and brought into operation, we began planning the next major system delivery.  We were quite aware that another rushed delivery of a poor-quality system could have significant ramifications, and for that reason we placed a pronounced emphasis on quality.  This was decades ago, predating pair programming, Six Sigma, the CMM/CMMI and even the relevant IEEE standards; we did, however, recognize that Michael Fagan’s published work on software inspections was directly applicable and could help us greatly. 

The choice was made to implement ‘Fagan’ Software Inspections as a mandatory practice.  In retrospect, this was a tremendously significant decision that was relatively easy to make.  It wasn’t a superficial decision either; the decision was to train every project member (management included) on the method, adopt Fagan inspections on all work products, implement a metrics program, and staff a part-time position to administer all inspections.  

Learned Lesson #2:

Make an unmistakably clear project-wide decision to implement peer reviews.

A decisive direction will establish organization-wide expectations on behaviors and results.  This decision, alone, will not suffice, but it is necessary; you’ll need to back it with other organizational change actions.

 

With the decision made, it was time to familiarize everyone with the inspection approach.   Training for the 100-person (and growing) project team was particularly effective.  Everyone on the team attended a multi-day training session that built a foundation for inspections and trained us on the method – about two-thirds of the time was spent going through example inspection meetings, letting us convince ourselves that inspections actually are effective.

Learned Lesson #3:

Train the entire team as a group on the chosen peer review methods.

This shared experience for team members will bring them together in understanding the goals, methods, and expectations.

  

Tailoring our Inspection Process

The success stories in our implementation are much like claims made by anyone who advocates the use of inspections, and we had many of these stories: troublesome problems found during code inspections that would have been nearly impossible to find during testing, a serious problem or two found in most code inspections, proliferation of good coding practices/a decrease in poor coding practices, and a general increase in knowledge about our software design.  Management support was exceptional, and summary inspection data was openly shared and consistently used in monitoring deliverable quality.  Most significantly, we had an order of magnitude fewer serious problems at project completion.

Of more interest are the process mutations – some would call these a ‘tailoring’ of inspection methods for our team.  Although some might criticize these alterations, we found that slightly modifying Mr. Fagan’s method gave us something far more acceptable to our project.  These changes occurred over a period of about a year – we didn’t have a ‘final’ answer in mind when we started to implement inspections:

  • Inspecting all work artifacts.  Fagan defined inspections for design documents and source code.  We expanded the definition to include all management and technical artifacts (e.g., project plans, test plans, deployment plans, system architecture).   We never even considered inspecting certain operational documents such as status reports and executive briefings.
  • Gaining agreement via the inspection.  Strictly speaking, an inspection is to find errors and defects.  We went beyond this for a few management documents (e.g., the project plan), and used the inspection meeting as the vehicle for achieving agreement.  It was not an entry requirement that all stakeholders signal their agreement prior to an inspection of the plan, although there was a tacit understanding that the author would identify and resolve any potentially significant problems prior to the inspection.
  • Solutions welcome, sometimes.  Inspections are for finding defects, not solving problems.  In most cases, that is an accurate description of our implementation.  However, we also allowed the inspection moderator the leeway to allow limited discussion on a possible solution.  Some code inspections and most inspections of management documents spent some time on solutions.
  • Alternatives to inspection.  Our governing practice was that all code be inspected before the start of system testing.  Developers would frequently make changes post-inspection; in most instances these were bug fixes and our rule in this case was to require a ‘desk check’ by another developer.

Learned Lesson #4:

Adapt and adjust peer review methods to get you the results that you need.

It is unlikely that an out-of-the-box method of peer reviews will be an instant fit for your project culture.  Devote some energy in your initial choice of the method or methods that will be consistent with your project environment.  As well, consider implementing an initial set of modifications and also allow for some later adjustments that will work well for your project team and get you the results that you need.

Through two years of rigorously inspecting technical documents, source code, and management plans our use of inspections became a good fit for the project and was used for many subsequent years until the team disbanded.  While it wasn’t an ‘out of the box’ Fagan inspection, it was a very effective implementation for our project; everyone was familiar with the method and the results were undeniable.

Learned Lesson #5:

With peer reviews firmly in the organization’s culture, sustained benefits will surely result.

Inspections became so ingrained for us that the process documentation could probably have been discarded a year after initial implementation.  Even newcomers to the project readily became familiar with the method.  Best of all, our quality measures proved that the initial gains were sustained and became the platform for the next level of improvement.

  

Metrics and Reports

Early on we chose just a handful of metrics related to the inspection process. These measures were used by management to track inspection coverage and deliverable quality.  Specific inspection data, identifiable to an individual, was never published nor made available to management.  A part-time inspection administrator collected information from each inspection and created management reports.  These reports showed progress in completing inspections for management documents, technical documents, and code.  Summary reporting showed defect density information – because code modules/subsystems were never identified with the defect density information, any further investigation would go through the inspection administrator rather than line management.
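The administrator’s summary reporting described above can be sketched as a simple aggregation that deliberately drops anything identifying an author or module.  This is an illustrative sketch only – the record fields and category names are my assumptions, not from the original project:

```python
from collections import defaultdict

def summarize(inspections):
    """Aggregate per-inspection records into category-level totals.

    Author and module identifiers are intentionally never carried into
    the output, mirroring the anonymized summary reports described above.
    Field names ("category", "size_kloc", "serious_defects") are
    hypothetical.
    """
    totals = defaultdict(lambda: {"inspections": 0, "kloc": 0.0, "serious": 0})
    for rec in inspections:
        t = totals[rec["category"]]  # e.g., "code", "technical doc", "plan"
        t["inspections"] += 1
        t["kloc"] += rec["size_kloc"]
        t["serious"] += rec["serious_defects"]
    # Add a size-normalized defect density per category for the report.
    return {
        cat: {**t, "defects_per_kloc": round(t["serious"] / t["kloc"], 1)}
        for cat, t in totals.items()
    }
```

For example, two code inspections of 2 KLOC each, finding 6 and 4 serious defects, would roll up to a single "code" row with a density of 2.5 serious defects per KLOC – visible to management without pointing at any individual.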

As well, the measures were used in each inspection meeting to ensure the quality of the process itself:

  • Preparation Rate.  Information collected at the start of each meeting about preparation time helped us develop some norms on the necessary preparation time for an effective inspection meeting.  The moderator would consult a project ‘preparation time’ graph (preparation time vs. size of work product to inspect) to ensure that the preparation time for that inspection was within the statistical limits; any preparation time outside of the calculated limits was cause for an automatic deferral of that inspection.
  • Defect Density.  At the conclusion of the meeting, the moderator or scribe would consult a project ‘defect density’ graph (number of serious defects vs. size of work product inspected) and determine if the inspection results were within limits.  Any results out of limits (essentially too few or too many defects found) would result in an automatic disposition of ‘re-inspect.’
  • Inspection Rate.  At the conclusion of the meeting, the moderator or scribe would consult a project ‘inspection rate’ graph (inspection meeting duration vs. size of work product inspected) and determine if the inspection meeting duration was within limits.  Any results out of limits (essentially too fast or too slow) would result in an automatic disposition of ‘re-inspect.’
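The three in-meeting checks above amount to comparing a size-normalized rate against a band of limits derived from the project’s historical graphs.  A minimal sketch of that logic follows; the numeric limit values are hypothetical placeholders, since a real project would derive its own bands from its inspection data:

```python
def within_limits(value_per_kloc, low, high):
    """Return True if a size-normalized metric falls inside the band."""
    return low <= value_per_kloc <= high

def disposition(size_kloc, prep_hours, serious_defects, meeting_hours):
    """Apply the three checks described above; any failure triggers an
    automatic deferral or re-inspection.  All limit bands are hypothetical."""
    prep_rate = prep_hours / size_kloc            # prep hours per KLOC
    defect_density = serious_defects / size_kloc  # serious defects per KLOC
    inspection_rate = meeting_hours / size_kloc   # meeting hours per KLOC

    if not within_limits(prep_rate, 1.0, 8.0):
        return "defer: preparation outside limits"
    if not within_limits(defect_density, 2.0, 50.0):
        return "re-inspect: defect density outside limits"
    if not within_limits(inspection_rate, 1.0, 6.0):
        return "re-inspect: meeting pace outside limits"
    return "accept"
```

Note that the defect-density check is two-sided, just as the article describes: finding suspiciously *few* defects is as much a process red flag as finding too many.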

 

Learned Lesson #6:

Use metrics and measurements to give everyone visibility of product quality.

Without inspection metrics, it would be rather difficult for management to have a grasp of the work product quality (at least until the first test results are available).  Inspection metrics also give each author potentially valuable feedback on the quality of their work within that project environment.

 

Implementing Peer Reviews in Your Organization

Our project had a well-recognized need for improving quality, and we had every ‘critical success factor’ (e.g., management support, adequate training, technical expertise) necessary for a successful implementation of inspections.  This showcase example is the only time in my career where this has been the case – perhaps your project situation is similar.  Even if your project circumstances are not as conducive to implementing peer reviews, some of the lessons we learned might be helpful.

Since we started using inspections, there have been massive changes in software engineering and in the operation of project teams:  project teams form and disband more frequently, countless variants of peer reviews now exist, teams and team members are often geographically dispersed, few project teams stay together for an extended period of time, various agile and software engineering methods render classic ‘peer reviews’ redundant, tools are available to support peer reviews, and time pressures seem to make it even more difficult to set aside the time for reviewing a colleague’s work product.  Thus the choice to make when looking for techniques that will improve quality is not nearly as obvious today as it was a few decades ago – this is the topic of the third article in this series.