When I release components, example code, or even just helper classes, I often tout 100% test coverage as a feature. This (as I probably state often enough :P) means that my unit tests execute 100% of the lines in the source code. But what advantage does this actually bring to a developer, and, just as interesting, what does having complete test coverage not mean?
For people practicing true TDD (test first -> red, green, refactor), 100% coverage is nothing unusual, though even they may decide not to write tests for every possible invalid input: if a piece of code satisfies all the tests, and the tests cover everything the code should do, that's enough. If you're building a library, on the other hand, the use case of a customer providing invalid input is a valid concern worthy of a test.
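To make the library case concrete, here is a minimal sketch (in Python with pytest, as an assumed stack; the `parse_port` helper is purely hypothetical) of a test that exists solely to pin down behaviour for invalid input:

```python
import pytest


def parse_port(value: str) -> int:
    """Hypothetical library helper: parse a TCP port from user-supplied text."""
    port = int(value)  # raises ValueError for non-numeric input
    if not 0 < port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port


def test_rejects_out_of_range_port():
    # A library cannot assume well-behaved callers, so invalid input
    # gets its own test even though the happy path may already be covered.
    with pytest.raises(ValueError):
        parse_port("70000")
```

The point is not the helper itself but the mindset: once your code sits behind a public API, "garbage in" becomes part of the contract you want your tests to document.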
I, however, am currently adding unit tests to an existing code base, and I decided to go for 100% test coverage. In this short article, I will explain why I see complete test coverage as a worthwhile goal, what effect going for that level of coverage has on a project, and what it says about the code.