On 100% Unit Test Coverage and Other Nonsensical Ideas
I will simply riff for a while on some things I read Joel Spolsky say about test-driven development (I read a transcript of one of his podcasts).
Nobody should strive for 100% test coverage, let alone microtest coverage, for obvious reasons. Among those obvious reasons, I find two glaring ones: we generally don’t agree on what the term means; and trying to do it leads to writing tests for their own sake, rather than as a means to write sufficiently correct software.
I have learned two important things through practice and observation: no single optimal number for test coverage can exist for all projects; and if you insist on an optimal number for test coverage, choose 85%, meaning that the average team ought to test all but the most straightforward 15% of the average system. When I practise TDD, I end up with around 85% test coverage because of the way I apply the principle of Too Simple to Break. Among the 15% untested you will find dead simple get/set methods and dead simple delegation.
When Joel concludes something based on the hypothesis that otherwise thoughtful people have 100% test coverage as a goal, he runs well off the track. I don’t doubt that some people seek 100% test coverage, because those people help keep me in business. Stop it, or I’ll bury you alive in a box.
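To make Too Simple to Break concrete, here is a minimal sketch in Java; the class and its methods are invented for illustration, not taken from any real project. The getter, setter, and one-line delegation fall into the untested 15%, while the method that makes a decision earns a microtest.

    import java.util.ArrayList;
    import java.util.List;

    // A hypothetical Account class. The first three methods are Too Simple to Break,
    // so I leave them out of my microtests; the last public one contains a decision,
    // so it belongs in the tested 85%.
    public class Account {
        private final List<Integer> depositsInCents = new ArrayList<>();
        private String ownerName;

        // Dead simple get/set: too simple to break.
        public String getOwnerName() {
            return ownerName;
        }

        public void setOwnerName(String ownerName) {
            this.ownerName = ownerName;
        }

        // Dead simple delegation: too simple to break.
        public void recordDeposit(int amountInCents) {
            depositsInCents.add(amountInCents);
        }

        // This one has logic in it, so I write a microtest for it.
        public boolean canWithdraw(int amountInCents) {
            return balanceInCents() >= amountInCents;
        }

        private int balanceInCents() {
            return depositsInCents.stream().mapToInt(Integer::intValue).sum();
        }
    }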
Some innocuous-looking changes cause an unusually large number of tests to fail. While I don’t like this situation, I draw a different conclusion from it than Joel does. I conclude that this points to a design flaw worth exploring. I don’t have a “proof” for this, but I have observed good results when I have treated my own designs this way. In this vein, I follow the maxim I learned from the Pragmatic Programmers: abstractions in code and details in metadata. Joel uses this example:
Because you’ve changed the design of something… you’ve moved a menu, and now everything that relied on that menu being there… the menu is now elsewhere. And so all those tests now break. And you have to be able to go in and recreate those tests to reflect the new reality of the code.
I don’t understand why we would have microtests that check that a specific menu shows up in a specific location. Remember: abstractions in code and details in data. Also remember: three strikes and you refactor. Putting these two principles together, once I have a few menu items, I’ve extracted the details about individual menus to some List of Menu objects and an engine that operates on them. Also, I have probably separated that List from the code that presents the menus. If I don’t like how my code presents the menus, I can fix that with test data that has no bearing on the actual menus. If I don’t like the order of my menus, I can change the Menu objects in the List, just data, and do a quick manual inspection without having to change the tests for my menu-presenting engine. Moving a specific menu somewhere should not require any code changes; and if it does, then you have a design flaw: you have details in code. Stop it, or I’ll bury you alive in a box.
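Here is a rough sketch of that separation in Java, with invented names (none of this comes from Joel’s example or any real codebase): the Menu objects are plain data, the presenting engine is the abstraction, and the engine’s microtest uses its own test data, so moving a real menu changes data rather than code and breaks no tests. I assume JUnit 5 for the test.

    import java.util.Comparator;
    import java.util.List;

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Details in data: a Menu is just data describing what to show and where.
    record Menu(String label, int position) {}

    // Abstraction in code: the engine presents whatever List of Menu objects it receives.
    class MenuPresenter {
        public String present(List<Menu> menus) {
            StringBuilder output = new StringBuilder();
            menus.stream()
                 .sorted(Comparator.comparingInt(Menu::position))
                 .forEach(menu -> output.append(menu.label()).append("\n"));
            return output.toString();
        }
    }

    // The microtest exercises the engine with test data. The real menus live elsewhere,
    // as data, so moving a real menu touches no code and breaks no tests.
    class MenuPresenterTest {
        @Test
        void presentsMenusInPositionOrder() {
            MenuPresenter presenter = new MenuPresenter();
            String output = presenter.present(
                    List.of(new Menu("Second", 2), new Menu("First", 1)));
            assertEquals("First\nSecond\n", output);
        }
    }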
In general, if a single change causes an unusually high number of tests to fail, then your tests have a duplication problem. Stop letting duplication flourish in your tests, or I’ll bury you alive in a box.
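One common shape of that duplication, sketched here with invented names and assuming JUnit 5: when every test constructs the same object directly, one constructor change breaks them all; extracting the construction into a single creation method means the next such change touches one place.

    import java.util.ArrayList;
    import java.util.List;

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // A minimal, invented Invoice, just enough to show the point.
    class Invoice {
        private final List<String> lineItems = new ArrayList<>();
        Invoice(String customer, String date, String currency) {}
        void addLineItem(String description) { lineItems.add(description); }
        int lineItemCount() { return lineItems.size(); }
    }

    class InvoiceTest {
        // Every test used to call the constructor directly, so adding a parameter
        // broke all of them at once. Now the knowledge of how to build a typical
        // Invoice lives in exactly one place.
        private Invoice typicalInvoice() {
            return new Invoice("ACME", "2009-04-01", "CAD");
        }

        @Test
        void startsWithNoLineItems() {
            assertEquals(0, typicalInvoice().lineItemCount());
        }

        @Test
        void addingALineItemIncreasesTheCount() {
            Invoice invoice = typicalInvoice();
            invoice.addLineItem("Consulting");
            assertEquals(1, invoice.lineItemCount());
        }
    }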
Thanks to Bob Newhart for the line “Stop it, or I’ll bury you alive in a box”, which I heard for the first time on Mad TV.