Which would you rather have: a disciplined programmer or a strong suite of tests? A client asked me this question recently, and I don't know whether I gave them a particularly good answer. I'm going to try to do that here.
I can see benefits to both situations, so unfortunately, I can't give a short answer to this question. On the one hand, a disciplined programmer will make sensible decisions even if they don't do what I would do. If I trusted the programmer's general discipline, then I'd feel comfortable trusting them to produce good work, by which I mean a steady stream of valuable code that works. On the other hand, if I rely on the discipline of a specific programmer or programmers, and they produce artifacts that I can't easily consume, then when they disappear, my steady stream of valuable code that works disappears with them. This becomes the Truck Number problem. Since I use a good suite of tests to help reduce volatility in the marginal cost of features[1], it might make sense for me to require that programmers supplying me with features also supply me with tests, in order to retain the option of maintaining the code base cost-effectively without them.
So which would I prefer? In writing this article, we're about to find out. It really does depend. Let me explore this question, and perhaps with enough additional context, I can choose.
Just What Kind of "Disciplined"?
Suppose that the programmers in question have tremendous discipline, but don't write tests. What do I need from them—assuming that I can't have tests—that would make me feel comfortable taking over their code base? I have a few ideas.
- I need to know where in the code to find things.
- Once I find a thing in the code base, I need to know either (1) that I've found everything I need or (2) that I can figure out where else to look.
- I need confidence that when I change something, that doesn't affect unexpected things elsewhere.
This starts to sound an awful lot like following the Four Elements of Simple Design. This leads me to ask a different question.
If programmers write simple code (as defined by the Four Elements) without tests, what else do I need to feel comfortable taking over their work?
Funny: it has never come up. If one designed a system well according to the Four Elements, then I imagine I'd find it relatively easy to add tests after the fact. This follows from the basic notion that the Four Elements encourage us to write testable code, and by the definition of "testable", I should have little trouble testing testable code.
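To make that notion concrete: here's a minimal sketch (in Python, with a hypothetical function, not taken from any particular code base) of what "testable by design" means in practice. When code keeps its dependencies explicit and avoids hidden state, a test written after the fact needs no scaffolding at all.

```python
# A hypothetical illustration: simply-designed code, with no hidden clock,
# database, or global state, accepts tests after the fact with little friction.

def total_price(line_items, tax_rate):
    """Compute the taxed total for (quantity, unit_price) pairs."""
    subtotal = sum(quantity * unit_price for quantity, unit_price in line_items)
    return round(subtotal * (1 + tax_rate), 2)

# Adding a test later requires no setup, because nothing needs faking.
def test_total_price_applies_tax():
    assert total_price([(2, 10.00), (1, 5.00)], 0.10) == 27.50
```

The cost of adding such a test after the fact stays low precisely because the design gave the test somewhere obvious to attach.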
I'd prefer not to have to spend the extra time to add those tests, but I could tolerate it. This becomes a decision based mostly on business considerations: if it costs me ¤100 000 to add tests to your code, then you'd better have saved me more than that amount compared to the best available alternative that includes having written tests. As long as you do that, what right do I have to complain?
I've set aside for the moment that I doubt I could ever hire programmers capable of writing simply-designed code without tests, except by the purest of accidents. I remain open to the possibility.
Time Really Doesn't Equal Money
Well... I can find one potential problem. The cost of taking over such a simply-designed code base includes not just money, but also time, and I can replace money, but I can never replace time.[2] I have to take into account the cost of delay in having to add tests to this hypothetical code base. Fortunately, I can amortise this cost over the lifetime of the projects that maintain this code base, making it perhaps history's least expensive legacy code.
I think I can live with that.
Now, suppose I have simply-designed code that has saved me more in the construction than I'll end up paying in adding tests to it after the fact. Do I miss anything else significant by not receiving tests along with the code?
About This Code Base
I will definitely want some kind of one-page document describing the overall design of the system—the Five Big Things—as well as which features the programmers claim to have completed and which they would complete next, if they continued. Presumably, as the client of this hypothetical product, I'd already have considerable understanding of the product, the domain, and the features that I believe they have already built. (It wouldn't hurt to compare my perception of "what's done" with theirs.)
So this brings me to a "first candidate" final list of things I'd need to have in order to trust a code base written by "disciplined programmers" who didn't write tests.
- A short document describing the overall design of the system, the features that the programmers claim to have completed, and the features that the programmers intended to complete next.
- A code base that shows overwhelming evidence that the programmers have designed it simply according to the Four Elements of Simple Design.
- A steady rate of producing features that pays for the (relatively small amount of) extra time I have to spend to add tests as I maintain the code base myself.
I think that, if I have all these things, I can accept taking over a code base from a group of programmers who haven't written tests.
Of course, this doesn't mean that I recommend setting fire to all your tests just to see what happens.
Now what about the reverse? Suppose I had a strong suite of tests, but not particularly disciplined programmers? What risks would that introduce, and how could I mitigate or compensate for them?
What Kind of "Strong Suite of Tests"?
What would I expect from a "strong suite of tests"? I would want mostly microtests, written in the style of collaboration and contract tests[3], with a sprinkling of customer tests written in the style of the most caring practitioners of behavior-driven development. These customer tests would help me understand the features, and the microtests would give me the confidence I need to change the code.
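For readers unfamiliar with the distinction, here is a minimal sketch (in Python, with hypothetical class and method names) of the two halves of that style: a collaboration test checks that an object asks its collaborator the right question and handles each allowed answer, while a matching contract test checks that real implementations of the collaborator keep their side of the bargain.

```python
from unittest.mock import Mock

class PriceDisplay:
    """Formats the price that a catalog reports for a barcode."""
    def __init__(self, catalog):
        self.catalog = catalog

    def display_price(self, barcode):
        price = self.catalog.find_price(barcode)  # the collaboration
        return "Price not found" if price is None else f"${price:.2f}"

# Collaboration test: does PriceDisplay ask the catalog the right question,
# and handle the answers the catalog's contract allows?
def test_displays_price_found_in_catalog():
    catalog = Mock()
    catalog.find_price.return_value = 7.95
    assert PriceDisplay(catalog).display_price("12345") == "$7.95"
    catalog.find_price.assert_called_once_with("12345")

# Contract test: does a real catalog honor the answers that the collaboration
# tests assumed? Run this against every implementation of the catalog.
def contract_catalog_returns_none_for_unknown_barcode(catalog):
    assert catalog.find_price("no-such-barcode") is None
```

The point of the pairing: every answer a collaboration test stubs should correspond to a contract test proving some implementation actually gives that answer.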
Here lies the problem: the cost of understanding and maintaining the tests likely corresponds quite well to the cost of understanding and maintaining the production code. This makes sense, since the same people likely wrote both. (I recognise that I might not justifiably assume this, but I will do so anyway.)
Since the programmers in question do not, by assumption, have the kind of discipline I'd prefer, I can expect to find design problems in the production code. Generally speaking, where I find design problems in the production code, I can expect to find tight coupling between the tests and implementation details in that production code. I can expect a lot of indirection without abstraction. In short, I can expect well-tested, well-covered legacy code. In theory, I can rescue this legacy code, but I'd have to deal with the usual problems of volatility in the cost of rescuing the code. In this case, I'd probably find lower overall cost in rescuing the code, but similar volatility to that of legacy code without tests. This seems intuitively obvious, given that in some cases, I will find the tests so tightly coupled to implementation details that I will end up throwing the tests away and starting over.[4] As tests reveal more about the implementation under test, they bury salient details about the essential behavior, and I find myself performing acts of archaeology to uncover that essential behavior myself, tests or not.
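To show what I mean by tests coupled to implementation details, here's a small sketch (in Python, with hypothetical names, not from any real code base) contrasting an overly-specified test with a behavior-focused one. The first pins down how the code works; the second states only the essential outcome.

```python
from unittest.mock import Mock, call

class ReportBuilder:
    """Joins formatted entries into a report, one line per entry."""
    def __init__(self, formatter):
        self.formatter = formatter

    def build(self, entries):
        lines = [self.formatter.format(entry) for entry in entries]
        return "\n".join(lines)

# Overly specified: this pins down *how* build() works (one format() call
# per entry, in this exact order), so refactoring the loop breaks the test
# even when the report comes out identical.
def test_build_overspecified():
    formatter = Mock()
    formatter.format.side_effect = lambda entry: entry.upper()
    ReportBuilder(formatter).build(["a", "b"])
    assert formatter.format.call_args_list == [call("a"), call("b")]

# Behavior-focused: this states only the essential outcome, so it survives
# refactoring and documents what the code does rather than how.
def test_build_joins_formatted_entries():
    formatter = Mock()
    formatter.format.side_effect = lambda entry: entry.upper()
    assert ReportBuilder(formatter).build(["a", "b"]) == "A\nB"
```

A suite dominated by tests of the first kind buries the essential behavior exactly as described above, and often ends up discarded during a rescue.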
I don't want to paint an entirely dire situation. Certainly, having tests increases the likelihood that I can change some (perhaps much) of the code safely, albeit perhaps not confidently, at least at first. I would probably find the design painful, but would find changing the design perhaps less difficult than I might do in a more typical legacy code base. On the other hand, having to write tests for legacy code gives me an opportunity to understand what the code does in a way that merely reading overly-specified tests does not do. I have to prepare myself for the possibility that having to write tests from scratch for legacy code represents a feature, not a bug, in the process.
Wow. I did not expect that. (Really.)
I Guess I'd Rather Have Disciplined Programmers
It seems that I'd rather have disciplined programmers who don't deliver tests than less-disciplined programmers who deliver even a strong suite of tests. No, this isn't a trick. ("Simple design passes all the tests, but if there are no tests, then trivially the design is simple..." No.) Yes, I'd like tests, too, but if someone can achieve great results without tests, then I have no right to impose my techniques on them. I have to say that this surprises me, because I've probably written in the past that I'd rather have the tests. It seemed like a good idea at the time. I can't account for the shift in my thinking.
In spite of this, please write tests, no matter how disciplined you paint yourself. I find that writing tests and the attendant refactoring have helped me develop a lot of discipline, and I still insist that I write tests, so if I hired you, then I'd probably insist on the same from you.
My friend Michael Bolton told me that this article reminded him of the tacit knowledge gap problem. In particular, the tests (really "checks") can only represent someone's ideas about what the system ought to do, and so will necessarily miss some piece of information that Murphy's Law ensures we will find important at some inopportune moment. In that moment, we would surely wish we had access to the programmer and not just the checks/tests.
Indeed, checks seem to encode, at best, a relatively recent snapshot (we hope) of what a sample of us (probably chosen at random) thought the product should do, limited by our ability (given our skills and energy at the time) to articulate those thoughts. Stated this way, it seems obvious that I'd rather have disciplined people than the checks themselves!
J. B. Rainsberger, "Beyond Mock Objects". I like mock objects, but I prefer avoiding unnecessary dependencies entirely. Don't inject what you can avoid.
c2.com community, "Truck Number Problem". This relates to the notion that learning is the bottleneck: in particular, hoarding information in a small number of brains slows us down.
J. B. Rainsberger, "Putting An Age-Old Battle To Rest". In this article, I present the Four Elements of Simple Design, but with a twist.
J. B. Rainsberger, "Integrated Tests are a Scam". Video, 64 minutes. How I gain high levels of confidence from microtests.
J. B. Rainsberger, "Beyond Mock Objects". This article discusses the problem of indirection without abstraction.
J. B. Rainsberger, "Surviving Legacy Code". If you need help surviving your own legacy code, then I'm happy to help.
When we say that we take care of the design "for economic reasons", I believe that we generally mean this. Note that "marginal cost" simply means "the cost of the next one".↩
I have book-length opinions on the topic Integrated Tests are a Scam, but I haven't found the energy to write them. I'm now doing that very slowly. Someday, I might write that book. In the meantime, enjoy my most recent talk on the subject.↩
I have graciously assumed that the programmers in question did not write production code that depends on tests. I realise that I take a risk in making that assumption, but I find the alternative too horrible to contemplate.↩