A paper first published in the Empirical Software Engineering journal reports: "TDD seems to be applicable in various domains and can significantly reduce the defect density of developed software without significant productivity reduction of the development team." The study compared four projects at Microsoft and IBM that used TDD with similar projects that did not use TDD.
The paper was authored by Nachiappan Nagappan (Microsoft), E. Michael Maximilien (IBM), Thirumalesh Bhat (Microsoft), and Laurie Williams (North Carolina State University), and published in Volume 13, Number 3 of the journal Empirical Software Engineering. It is also available from the Empirical Software Engineering Group at Microsoft Research.
The paper includes one case study from IBM and three from Microsoft. Each case study compares two teams working on the same product, using the same development languages and technologies, under the same higher-level manager, only one of which used test-driven development (TDD). None of the teams knew during their development cycles that they would be part of the study. The IBM case study followed teams doing device driver development; the Microsoft cases followed teams working on Windows, MSN, and Visual Studio.
The paper describes the TDD practices used by the teams as both a minute-to-minute workflow and a task-level workflow; a brief illustrative sketch follows each list below.
Minute-to-Minute Workflow
- Write a small number of new tests
- Run the tests and see that they fail
- Implement code to satisfy the tests
- Re-run the new unit test cases to ensure they now pass
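As a rough sketch of that minute-to-minute loop (the paper itself contains no code, and the `word_count` function and tests below are purely hypothetical), one might write a small failing test first and then just enough code to make it pass:

```python
import unittest

# Step 3 of the loop: the implementation, written only after the tests below
# had been run and seen to fail (initially this function did not exist, so
# the run produced an error -- the "red" step).
def word_count(text):
    """Return the number of whitespace-separated words in text."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    # Steps 1-2: a small number of new tests, run first to confirm they fail.
    def test_counts_words_in_simple_sentence(self):
        self.assertEqual(word_count("test driven development"), 3)

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    # Step 4: re-run the new test cases and confirm they now pass.
    unittest.main()
```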
Task-Level Workflow
- Integrate new code and tests into the existing code base
- Re-run all the test cases to ensure the new code does not break anything
- Refactor the implementation and/or test code
- Re-run all tests to ensure that the refactored code does not break anything
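Continuing the same hypothetical example at the task level: the new code and tests are integrated into the shared code base, the full suite is re-run, the implementation is refactored, and the full suite is run once more. A minimal sketch of the refactoring step, here simply extracting a helper without changing behaviour so the earlier tests still pass unchanged:

```python
import unittest

def _tokenize(text):
    """Extracted helper: split text into whitespace-separated tokens."""
    return text.split()

def word_count(text):
    """Return the number of words in text (behaviour unchanged by the refactor)."""
    return len(_tokenize(text))

if __name__ == "__main__":
    # Re-run every test case in the project (equivalent to running
    # "python -m unittest discover" from the project root) to confirm
    # the refactoring did not break anything.
    suite = unittest.defaultTestLoader.discover(start_dir=".")
    unittest.TextTestRunner(verbosity=2).run(suite)
```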
The pre-release defect density of the four TDD products, measured in defects per thousand lines of code (KLOC), was 40% to 90% lower than that of the comparable projects that did not use TDD. Management subjectively reported a 15–35% increase in initial development time for the teams using TDD, though the teams agreed that this was offset by reduced maintenance costs.
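For concreteness, defect density is just defects divided by KLOC, and the reported figures are relative reductions against the non-TDD team. The numbers below are invented purely for illustration (the paper reports only the ratios, not raw counts):

```python
# Hypothetical counts for illustration only; the paper reports relative ratios.
non_tdd_defects, non_tdd_loc = 500, 100_000   # 5.0 defects per KLOC
tdd_defects, tdd_loc = 150, 100_000           # 1.5 defects per KLOC

def defect_density(defects, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

baseline = defect_density(non_tdd_defects, non_tdd_loc)   # 5.0
with_tdd = defect_density(tdd_defects, tdd_loc)            # 1.5
reduction = (baseline - with_tdd) / baseline               # 0.70, i.e. a 70% drop,
print(f"{reduction:.0%} lower defect density")             # within the reported 40-90% range
```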
These results can be compared to those found in a paper published in 2006 by Maria Siniaalto. That paper attempted to review and summarize the results from 13 other studies on test-driven development, including research conducted in industrial, semi-industrial, and academic contexts. Among the conclusions of the paper, the author wrote:
Based on the findings of the existing studies, it can be concluded that TDD seems to improve software quality, especially when employed in an industrial context. The findings were not so obvious in the semi-industrial or academic context, but none of those studies reported on decreased quality either. The productivity effects of TDD were not very obvious, and the results vary regardless of the context of the study. However, there were indications that TDD does not necessarily decrease the developer productivity or extend the project lead times: In some cases, significant productivity improvements were achieved with TDD while only two out of thirteen studies reported on decreased productivity. However, in both of those studies the quality was improved.
What are your experiences with TDD? Have you seen an increase in quality? What effects have you seen on developer productivity, and development time? Leave a comment and share your experiences.