I don’t want to claim that story test-driven development (STDD) doesn’t work, because some of my most respected colleagues teach the practice with success; however, I do want to warn people who might find themselves seduced by STDD, especially if they think of it as an easy replacement for test-driven development (TDD).
Allow me to clarify the two terms, TDD and STDD. To practice TDD, the programmer begins with a small, well-defined behavior they’d like to implement. Typically, they design that behavior as a method on a class (although they could get away with even less), then brainstorm a list of tests they might write. With such a list in hand, they run through the TDD cycle, illustrated beautifully by Bill Wake’s stoplight analogy. When the design behaves adequately and correctly, the programmer stops.
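One pass through that cycle might look like the following sketch. The `ShoppingCart` name and its behavior are my own invention for illustration, not something from the text.

```python
# One trip through the TDD cycle: red, green, refactor.
# All names here are hypothetical, chosen only to illustrate.

# Step 1 (red): write a small, focused test for the next behavior,
# and watch it fail before any production code exists.
def test_adds_two_amounts():
    cart = ShoppingCart()
    cart.add(3)
    cart.add(4)
    assert cart.total() == 7

# Step 2 (green): write just enough code to make the test pass.
class ShoppingCart:
    def __init__(self):
        self._total = 0

    def add(self, amount):
        self._total += amount

    def total(self):
        return self._total

# Step 3 (refactor): with the test passing, improve the design,
# re-running the test after each small change.
test_adds_two_amounts()
```

The point of the rhythm is that the failing test comes first, and the passing test licenses the refactoring.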
To practice STDD, the programmer begins with a story and several story tests, which I tend to call “examples”. The programmer then selects a story test, watches it fail, then test-drives enough code to make it pass. One by one, the programmer makes each story test pass until they complete the entire story.
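The STDD rhythm nests one loop inside another: an outer failing story test drives an inner TDD loop of programmer tests. A hypothetical sketch, with all names invented for illustration:

```python
# Outer loop: a story test (example) for a "format a receipt" story.
# It stays red until the whole story works.
def story_test_formats_receipt():
    receipt = format_receipt([("tea", 250), ("scone", 300)])
    assert receipt == "tea 2.50\nscone 3.00\ntotal 5.50"

# Inner loop: programmer tests drive out the pieces one at a time.
def test_formats_cents_as_dollars():
    assert as_dollars(250) == "2.50"

def as_dollars(cents):
    # Prices held as integer cents to avoid floating-point surprises.
    return f"{cents // 100}.{cents % 100:02d}"

def format_receipt(items):
    lines = [f"{name} {as_dollars(price)}" for name, price in items]
    total = sum(price for _, price in items)
    lines.append(f"total {as_dollars(total)}")
    return "\n".join(lines)

test_formats_cents_as_dollars()   # inner loop green
story_test_formats_receipt()      # outer loop green: story done
```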
I have been teaching people about TDD and stories for years, and have practiced STDD in one form or another for most of that time. I find the technique helpful; however, when I have pushed STDD to its limit, it has guided me in directions I don’t like, something TDD has almost never done. When I watch others attempt to practice STDD, especially novices and advanced beginners, I see them misapply it and lead themselves towards a Big Ball of Mud, despite what the agile community’s marketing machine says about TDD and stories. I believe the intersection of the two creates problems for those not accustomed to the differing goals of TDD and user stories.
I use examples, the term I use for story tests, to show progress towards delivering a story, or feature. Broadly, I add examples to reflect increasing levels of understanding of the system to design, and as examples pass, that reflects progress towards delivering an ever more powerful system. I use programmer tests, the term I use in place of unit tests, to test my design ideas as they come to me and to help me type code in correctly. Any time all the programmer tests pass, the system works as designed, even if it does not yet do everything the business needs. Any time all the programmer tests pass, I can freely commit changes to the main line of the project’s design repository.
More succinctly, examples help us design the right system and programmer tests help us design the system right. (I prefer “correctly” there, but then I lose the symmetry.)
I often see programmers try to use passing examples as an absolute criterion to stop designing. They underestimate, in my opinion, the role of programmer tests in putting positive pressure on their design. Examples, especially when written as end-to-end or integration tests (tests whose failures do not isolate the mistake to a single method), simply do not put positive pressure on a design: their high-level nature can’t constrain a design enough to support careful refactoring. For this reason, I recommend novices and advanced beginners not practice STDD until they first see or feel for themselves the impact focused, small programmer tests have on their design.
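The difference in failure isolation can be made concrete. In this hypothetical sketch (the discount functions are my invention), the story test exercises the whole path, while the programmer test pins a single method:

```python
def apply_discount(price_cents, percent_off):
    # Invented helper: discount a price expressed in integer cents.
    return price_cents * (100 - percent_off) // 100

def checkout(prices_cents, percent_off):
    # End-to-end path: discount each line item, then total them.
    return sum(apply_discount(p, percent_off) for p in prices_cents)

# A story test (example): exercises the whole path end to end.
# If it fails, the mistake could hide anywhere along that path:
# the discounting, the summing, or the wiring between them.
assert checkout([1000, 2000], 10) == 2700

# A programmer test: pins down one method. If it fails, we know
# exactly which piece of the design to inspect.
assert apply_discount(1000, 10) == 900
```

A failing `checkout` assertion tells you the story isn’t done; a failing `apply_discount` assertion tells you where to look. Only the second kind constrains the design tightly enough to refactor with confidence.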
I want to leave no room for doubt: I do not mean that novices should avoid STDD as an “advanced practice”, but rather that a combination of novice tendencies makes STDD harder than TDD to practice well. Specifically, the novice tends to write examples as end-to-end tests, which provide too much design freedom and exert too little positive pressure on the design to guide refactoring and prevent defects. Instead, I would counsel novices and advanced beginners to focus on TDD and run the examples every hour or so to measure their progress towards delivering the story.
Read more about how to practice STDD well.