
Opinion: Code Coverage Stats Misleading

John Casey, a key contributor to the Apache Software Foundation's Maven Project, recently spent some time refactoring Maven's assembly plugin. He thought he'd use coverage reporting to mark his testing progress and to make sure he didn't break anything as he went. At the very least, it was a learning experience.

He constructed a completely new suite of tests focused on small, specific units of the new plugin's implementation. The refactoring went well, as did the testing. He had nearly perfect coverage numbers on the parts he considered critical, and felt fairly confident that the new plugin would almost drop in as a replacement for the old incarnation. That's when things started to fall apart.

You can read the details on Casey's blog, along with his reasons for concluding that "when you're seeking confidence through testing, perhaps the worst thing you can do is to look at a test coverage report."

His conclusion: coverage reporting is dangerous. It tends to distract from the use cases that should drive the software development process, and it gives the impression that once code is covered, it is tested and ready for real life. But a coverage report won't tell you that you've forgotten to test the contract on null handling. It won't tell you that you were supposed to check that String for a leading slash and trim it off if necessary. To achieve real confidence in your tests, you would have to cover most of your code multiple times over, exercising each line under various conditions... And since there is no hard-and-fast rule about how many test passes it takes to test any given line of code adequately, the concept of a coverage report is itself fatally flawed.
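To make the point concrete, here is a minimal Java sketch of the kind of blind spot Casey describes. The names and the normalize() helper are hypothetical, not taken from the assembly plugin; the test uses JUnit 4.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CoverageBlindSpotTest {

        // Hypothetical helper: strips a leading slash from a path.
        static String normalize(String path) {
            return path.startsWith("/") ? path.substring(1) : path;
        }

        @Test
        public void trimsLeadingSlash() {
            assertEquals("foo/bar", normalize("/foo/bar"));
        }

        @Test
        public void leavesPlainPathAlone() {
            assertEquals("foo/bar", normalize("foo/bar"));
        }

        // These two tests execute every line and both branches of normalize(),
        // so a coverage tool reports 100%. Yet normalize(null) throws a
        // NullPointerException: the null-handling contract was never tested,
        // and the coverage report gives no hint that it should be.
    }

Whether normalize(null) should throw, return null, or return an empty string is exactly the kind of use-case decision that a perfect coverage number can obscure.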
