In an interesting article on Design and Code reviews, Kirk Knoernschild mentions that such reviews promise to improve software quality, ensure compliance with standards, and serve as a valuable teaching tool for developers. However, the way they are performed affects their value: in some organizations they genuinely add value to the software lifecycle, whereas in others a review might just be part of the bureaucracy.
He calls out some of the worst review practices:
- Witch hunt reviews - Making the developers who wrote the code feel threatened and attacked.
- Curly brace reviews - Focusing on structure and indentation instead of serious issues.
- Blind reviews - Reviewers who have never looked at the code before come to the review meeting unprepared.
- Exclusionary reviews - Reviewing only a sample of the code and leaving out other important pieces.
- Tree killer review - Waiting until the codebase becomes so large that a complete review is neither feasible nor effective.
- Token review - Doing the review as a formality just because management wants it done.
- World review - Conducting the review in front of a huge audience, many of whom are unrelated to the project, thereby intimidating the developers.
To do effective reviews, Kirk suggests that the team automate the review process as much as possible and gather metrics. The team should incorporate feedback mechanisms into its existing development environment so that developers are alerted to red flags before they check in code (a minimal sketch of such a check follows the tool list below).
He mentions some tools that help bring greater objectivity and focus to the review process:
- JDepend for design quality
- PMD for code cleanliness
- JavaNCSS for code quality
- Emma for test coverage
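
As an illustration of that kind of automated feedback, here is a minimal sketch (not from Kirk's article) of wiring one of these tools, JDepend, into a JUnit test so that design problems such as package dependency cycles fail the build before code is checked in. The `target/classes` path and the test name are assumptions about a typical Maven-style layout.

```java
import java.io.IOException;
import java.util.Collection;

import jdepend.framework.JDepend;

import org.junit.Before;
import org.junit.Test;

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class DesignQualityTest {

    private JDepend jdepend;

    @Before
    public void setUp() throws IOException {
        jdepend = new JDepend();
        // Point JDepend at the compiled classes; the path assumes a
        // Maven-style project layout.
        jdepend.addDirectory("target/classes");
    }

    @Test
    public void packagesShouldBeCycleFree() {
        Collection packages = jdepend.analyze();
        assertTrue("No packages found to analyze", packages.size() > 0);
        // Red-flag the build if any package dependency cycles exist,
        // so the problem surfaces before check-in rather than at a review.
        assertFalse("Package dependency cycles detected", jdepend.containsCycles());
    }
}
```

Run as part of the regular test suite, a check like this surfaces design red flags continuously instead of saving them up for a review meeting.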
Kirk further mentions an interesting way to do reviews, called the 20% review:
The idea behind the 20% review is simple: once 20% of development is complete, a review should be held. Some teams might find it beneficial to hold the 20% review each iteration. That's certainly effective, but I've found that if teams do a good job using metrics for a continuous review, holding the 20% review for each major system function is sufficient.
The 20% review should focus on initial design and code cleanliness. The metrics discussed above offer wonderful insight into the evolution and growth of the code while the size is still relatively manageable.
He concludes by emphasizing that using metrics to help drive reviews brings greater objectivity and focus to the review process. The greater the automation, the easier it is to gather those metrics and hence conduct an effective review. The review should also be held early enough that developers can apply what they learn while it still matters, so the effectiveness of the review is not compromised.