Sometimes The Ax Is Sharp Enough
OH A product owner say “I am so busy splitting stories I have no time to talk with actual users”
— John Miller (@agileschools) May 10, 2018
I’m going to use the term mechanism in this article to refer to part of a system (in the systems thinking or Theory of Constraints sense). I could have called this a “machine”, but I worry that if I did that, then some of you might interpret that as dehumanizing, since these mechanisms include people as well as equipment. Please hold your cards and emails. I recognize the risk in the abstraction, and I ask you to interpret me generously. At worst, I mean to illustrate my point (in part) even if we ignore the typical complications introduced by the fact that this work involves people. When we turn our attention to the psychological aspects of the situation, I believe that my point strengthens, rather than weakens.
At first blush, we might interpret John Miller’s tweet as a variation on the old saw “I’m too busy cutting down this tree to sharpen my ax”. We might infer that clearly the product owner should be talking to actual users rather than wasting time splitting (the “wrong”) stories. This feels right. I definitely don’t want to argue against its intent; instead, I want to alert you to the risk of falling into the territory of context-free best practice. (I note the irony of following a context-free best practice such as “always take care not to follow context-free best practice for its own sake”. Maybe Russell’s Paradox magically helps us here. I don’t know.) So should this product explorer focus on splitting stories or talking to actual users? What advice would you have for them?
I don’t know what John intended to imply by his tweet. That doesn’t worry me here. I would like to address a common, worrisome interpretation, and not his intent.
Understanding the Situation
Delivering a product iteratively and incrementally involves (among many other things) balancing two key activities, which I’ll broadly call “exploring features” and “publishing features”. Here, “exploring features” means the activities involved in deciding and understanding which features to try to build, while “publishing features” means the activities involved in delivering those features into the market. That means that talking with actual users and splitting stories both fall under “exploring features”, but talking with actual users lies more on the “input” side of exploring features, whereas splitting stories lies more on the “output” side of the activity. Splitting stories has more to do with feeding the “publish features” mechanism, while talking with actual users has more to do with feeding the “explore features” mechanism. Yes, this simplifies the situation considerably, but I need to simplify a little in order to start to understand.
Product explorers (the community of people who explore products and features, which includes Scrum’s “Product Owner” role) typically feel pressure to feed work into the “publish” mechanism, but in many (most?) environments, the bottleneck lies inside the “publish” mechanism. (Yes, I know. Please wait.) Theory of Constraints tells us to focus on improving throughput at the bottleneck. Among the strategies available to us, we might try to stop feeding defective work requests into the bottleneck, since defective work requests lead to defective intermediate results (doomed-to-fail investment and wasted operating expense), and then we risk feeding that defective work into downstream activities and so on and so on. Splitting “the wrong” stories certainly sounds risky, but we can’t conclude immediately that this risk always becomes a problem.
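To make the throughput argument a bit more concrete, here is a toy sketch of a single bottleneck activity. It doesn’t come from Goldratt and the numbers are entirely invented: it merely assumes the bottleneck can process a fixed number of stories per week, and that some fraction of the stories we feed it turn out to be “the wrong” stories. The useful throughput of the whole system falls by exactly that fraction, even while the bottleneck looks fully busy.

```python
# Toy model of a bottleneck activity; the numbers are invented for illustration.

def weekly_useful_throughput(capacity_per_week, defective_fraction):
    """Stories of genuine value that make it through the bottleneck each week."""
    # Defective work requests consume bottleneck capacity one-for-one,
    # so every defective story displaces a potentially valuable one.
    return capacity_per_week * (1 - defective_fraction)

for defective_fraction in (0.0, 0.2, 0.5):
    useful = weekly_useful_throughput(10, defective_fraction)
    print(f"{defective_fraction:.0%} defective input -> {useful:.0f} useful stories/week")
```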
Assessing Risk
If our market trusts us a lot (we list goodwill on balance sheets for a reason), then we can afford to let this defective work proceed all the way out to the market as published features. Maybe the market reacts to these features by yawning, or maybe with mild annoyance or bemusement. We might get away with this, even though it would almost certainly burn some amount of the trust we’d earned, which we would then need to invest in replenishing. (More unnecessary investment and wasted operating expense, which means going farther away from “the goal”.) But what if our market doesn’t trust us much? Our best case (!!) comes from stopping the defective work in process before it goes “too far”, resulting “merely” in throwing away significant amounts of time, money, and energy before anyone outside the building sees the pointless work we’ve done. And that assumes that we don’t fall prey to the sunk cost fallacy. If we don’t stop the defective work in process, then we risk alienating our market, losing current customers, driving up the total lifetime cost of our average customer, and losing profit directly. We train our market to expect defective features and products from us. And that assumes that we notice our market’s reaction. They don’t always tell us why they leave us.
Here, I mean “defects” and “defective work” more generally than the typical understanding of them as software “bugs”. I include features that work correctly, but solve problems that the market generally doesn’t have. For my purposes here, “defective work” means any work that, with a reasonably small investment, we could identify as work we would rather throw away than complete.
Not only does defective work increase doomed-to-fail investment, but it also denies service. A bottleneck activity processing doomed-to-fail work requests forces other requests to wait. At least some of these requests have a greater potential for profit than the defective work request that the bottleneck is working on now. This not only wastes money on doing “the wrong” work, but also risks reducing the profitability of “the right” work by delaying it. If this part of the system had spare capacity, then we might not worry as much, but when the bottleneck lies there, we probably don’t worry enough.
Remember: an hour lost to the bottleneck equals an hour lost to the entire system. That costs many, many currency units. Even more than that.
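To put rough numbers on that claim, here is a back-of-the-envelope sketch; the figures are invented for illustration and come from neither The Goal nor the original tweet. It treats an hour of bottleneck time as worth an hour of system throughput, and adds a modest cost of delay for the more profitable work waiting behind the defective story.

```python
# Back-of-the-envelope arithmetic; all figures below are assumed, not measured.

value_per_bottleneck_hour = 5_000    # assumed value of one hour of system throughput
hours_on_defective_story = 16        # assumed bottleneck time spent on "the wrong" story
cost_of_delay_per_hour = 300         # assumed cost of delaying "the right" work behind it

throughput_lost = hours_on_defective_story * value_per_bottleneck_hour
delay_cost = hours_on_defective_story * cost_of_delay_per_hour

print(f"Throughput lost at the bottleneck: {throughput_lost:,} currency units")
print(f"Cost of delaying higher-value work: {delay_cost:,} currency units")
print(f"Total: {throughput_lost + delay_cost:,} currency units")
```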
The whole situation seems rather fraught. Clearly, we need to question the need to split stories at the expense of figuring out which stories we ought to try to split at all, but we can’t use this as an excuse never to commit to trying to build and publish a story. So what should our poor product explorer do? Should they talk to real users or continue splitting stories? I can’t tell from here, so I can only draw people’s attention to a handful of questions that they should consider asking themselves and each other.
- How much do we, the entire group, fear letting the build-and-deliver mechanism sit idle? How much does this fear interfere with our ability to decide how to balance choosing stories against preparing them for people to build and deliver?
- How much do I, the product explorer, prefer splitting stories to talking to real users? Do I simply know how to split stories better? Do I prefer the work because it feels more concrete? more definite? more likely to lead to a tangible result?
- Within our small system, where does the bottleneck currently lie? With building features? With publishing features to the market? With exploring features? With building a shared understanding of the details of these features? With understanding how the market reacts to the features we publish?
- Within the larger system (assuming that our group doesn’t comprise the entire organization), should we interpret our entire small system as spare capacity or does the larger bottleneck lie with us? If we have spare capacity, then we can afford to waste some time, money, and energy, but where else might we invest our spare capacity? What else could we invest in learning that might lead to better results than either splitting these stories or talking to real users? And how much will it cost us in psychic energy dealing with the stress our stakeholders will feel when they see us not looking busy enough?
I would want to answer all these questions before trying to advise this product explorer to continue splitting stories, to drop everything and focus on talking to actual users, or even to drop both of these and do something else entirely. Or maybe it doesn’t really matter right now. I would at least like to feel like we’ve made this decision more intentionally than doing whatever the loudest voice or the highest-paid person has demanded in the moment or, perhaps worse, following the best-practice advice embodied in some aphorism, no matter how satisfying it sounds or which wise person once said it.
References
Tom DeMarco, Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency. Producing more-profitable results means letting some parts of the system remain idle some of the time. Don’t demand busyness. Reward results, not busyness.
Eliyahu Goldratt, The Goal. Let the throughput at the bottleneck guide our strategy; where we have spare capacity, we don’t have to use all that capacity all the time.
Gene Kim, Kevin Behr, and George Spafford, The Phoenix Project. If you’d rather learn about Theory of Constraints specifically in the context of software development, then you might read this before reading The Goal.
Patrick Lencioni, The Five Dysfunctions of a Team. The fundamental importance of vulnerability, meaning the psychological safety to say things like “It’s OK if the programmers have no story to work on for the next few days while we figure out which stories to work on”. This lies at the foundation of cooperation and collaboration.
Dale Emery, “Motivation”. We will feel more motivated to do the things that we know how to do, when we can foresee the results, and when we want those results. This sometimes tempts us to hide in familiar work, even if bottleneck theory suggests that we do something else.
Tom DeMarco and Tim Lister, Waltzing with Bears. An introduction to managing risk in the software world.