A Simple Question to Ask About Estimating Tasks
I’ve been reading How To Measure Anything. This book has prompted me to rethink some of my positions on the value of estimating small tasks on software projects. Today, I offer you something short and sweet: one simple question designed to illuminate the value of the work you do to estimate the effort/cost of small tasks.
How much money would we pay to reduce the uncertainty of the cost of delivering this feature?
If we answer $100 and it costs $200 to produce the estimate, then we agree that we’re wasting money and we should consider other strategies, such as splitting features until we find the kernel, then building that first. This allows us to limit cost with an activity that already provides value to us.
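To make that decision rule explicit, here is a minimal sketch in Python; the function name and the figures are mine, purely for illustration, and they simply restate the example above: if reducing the uncertainty is worth less than producing the estimate costs, decline to estimate.

```python
def worth_estimating(value_of_reduced_uncertainty, cost_of_estimating):
    """Estimate only when reducing uncertainty is worth more than producing the estimate costs."""
    return value_of_reduced_uncertainty > cost_of_estimating

# The example above: we'd pay $100 for the reduced uncertainty,
# but producing the estimate costs $200, so we decline to estimate.
print(worth_estimating(100, 200))  # False
```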
If you don’t yet agree or you need to make a stronger case to someone influential at your job, then read on; otherwise, you can safely stop here.
For over a decade I’ve been advising groups not to bother with estimating small tasks if they could avoid it. I have proposed other ways to invest that time, energy, and money that they would almost certainly find more profitable. Estimating small tasks represented an optimization that few of the groups I worked with were ready to exploit for their benefit. I have been reasoning that the value of the estimates doesn’t justify the investment, but I hadn’t had a particularly compelling way to articulate this argument until today. That’s what has me excited: I can now say something more meaningful than “trust me; I know what I’m talking about”.
A Little More Detail
It suffices to have a reasonable upper bound for the amount that you’re collectively willing to spend on reducing the uncertainty of the cost of delivering a given feature. If you’re almost certain that you’d feel comfortable spending that much, then you don’t need to worry about it any more: start estimating! If not, then reconsider for a moment.
Moreover, the people who feel most responsible or who are most held to account for delivering the feature need to have their opinion heard on this, since they might be willing to pay more to reduce the cost uncertainty for various intangible reasons (“to get my boss off my back”, “to impress a key customer”). We want a reasonable upper bound within the project community to guide our decision.
Yes, there is a danger of someone in the group trying to manipulate the others by intentionally overestimating the value of reducing the uncertainty in the cost of a feature. If you sense this happening, then you have a different kind of problem to address. This isn’t that article.
By the way, when you compute the cost of producing the estimate, it might suffice to compute a reasonable lower bound on that cost. If you already agree that this lower bound is too high, then you already agree not to invest in producing the estimate. You can compute a reasonable lower bound by estimating the cost of the working session in which we gather together to produce those estimates.
A Concrete Scenario
Suppose we have 6 programmers, 1 tester, and 1 “project manager” (whoever feels responsible for delivering the project in a way that the company would label “successful”) in a working session lasting 2 hours in which we spend 75% of our time estimating the tasks for the upcoming iteration. (The other 25% is dedicated to other business.)
First, compute a lower bound on the cost of the working session: it’s at least the salaries of the people involved plus the cost of delay for the other work they’re not doing. If you can measure that cost of delay, do it; if you can’t, then first just tally the salaries. That might already be too expensive.
Since 2 hours is approximately 0.1% of a year (50 weeks × 40 hours/week), you might find this computation quite easy. Pretend that the programmers earn ¤ 1000 per year, the tester ¤ 800, and the project manager ¤ 1200. The salary cost of the working session becomes ¤ 8 (= 0.1% of (6 × 1000 + 800 + 1200)).
Next, note that we spend on average 75% of the working session estimating the cost of features, so that cost becomes ¤ 8 × 75% = ¤ 6.
Next, pretend that we have 6 features to estimate in this working session, so we spend approximately ¤ 1 estimating the cost of each one. Since we’re not going to watch the clock while we do it, we can relatively safely estimate the cost at ¤ 0.5 to ¤ 2. If you need more confidence, you might estimate this as ¤ 0.25 to ¤ 4.
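If you’d like to double-check the arithmetic, here’s a minimal sketch of the same computation. The salary figures, session length, and feature count come straight from the scenario above; the variable names are mine, purely for illustration.

```python
# Lower bound on the cost of the 2-hour working session, using only salaries.
HOURS_PER_YEAR = 50 * 40                              # 50 weeks x 40 hours/week = 2000 hours
session_hours = 2
session_fraction = session_hours / HOURS_PER_YEAR     # approximately 0.1% of a year

annual_salaries = 6 * 1000 + 1 * 800 + 1 * 1200       # 6 programmers, 1 tester, 1 project manager
session_cost = annual_salaries * session_fraction     # = ¤ 8

estimating_share = 0.75                               # 75% of the session spent estimating
estimating_cost = session_cost * estimating_share     # = ¤ 6

features = 6
cost_per_feature = estimating_cost / features         # = ¤ 1

# Since we don't watch the clock, widen the interval for more confidence.
likely_range = (cost_per_feature * 0.5, cost_per_feature * 2)    # (¤ 0.5, ¤ 2)
safer_range = (cost_per_feature * 0.25, cost_per_feature * 4)    # (¤ 0.25, ¤ 4)

print(session_cost, estimating_cost, cost_per_feature, likely_range, safer_range)
# 8.0 6.0 1.0 (0.5, 2.0) (0.25, 4.0)
```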
Now the question becomes quite simple: would we pay upwards of ¤ 2 to reduce our uncertainty in the cost of delivering each feature? Let me offer you some related questions to help you answer this one:
- What would you do with that reduction in uncertainty if you had it?
- How much would it cost you to make the “wrong” choice? What’s the difference in cost between decision A (how we behave without knowing the estimated cost of the feature) and decision B (how we behave when we know the estimated cost of the feature)?
- How do you know that the value of that reduction in uncertainty is higher than ¤ 2? (Is it even higher than ¤ 0.5?)
Even if you can’t answer these questions very well yet, it likely suffices to notice whether you’d ever even considered these questions in the past. If you never have, then you’ve never had a clear basis for justifying the investment of time, energy, and money in estimating tasks during that iteration planning session.
OK… What Now?
If you broadly agree that you can’t justify the investment in estimating small tasks, then consider these activities instead:
- Split features to find the kernel, then deliver (don’t just build) the kernels first. This increases options for choosing the “bells and whistles” that your market will actually pay for.
- Write examples in order to build shared understanding within the project community about what it means to deliver each feature. This is what most people want when they use estimating techniques, so why not simply do it directly?
- Set budgets for tasks instead of trying to forecast cost. I find it more valuable to have this conversation: “We don’t know how much it costs to add the security we want to this system. How much is too much to spend? 2 days? OK. We’ll spend 1/2 day trying to do it, and then we’ll check in and decide whether we can very probably finish within the original 2 days, whether we should try something else, or whether we should just abandon ship.”
There’s more, but that probably gives you enough to get started.
I encourage groups to focus first on identifying and delivering value, then on improving the flow of value, and then worry later about monitoring and controlling cost. Controlling cost becomes more important in the Extract phase of Kent Beck’s 3X model or starting around phase 7 of Jurgen Appelo’s Shiftup Business Lifecycle model. Jurgen compares the two in this article. People tend to control cost because they know how; they look where the light is brighter, rather than where they’re more likely to find their lost keys.
How Did I Get Here?
This sentence crystallized the idea for me.
All measurements that have value must reduce the uncertainty of some quantity that affects some decision with economic consequences. —Douglas W. Hubbard, How to Measure Anything
I have added the emphasis here. If nobody is changing the plan in response to your estimates, then your estimates are worthless, except perhaps as ammunition that a malevolent stakeholder could use to blame you for the project’s failure.
References
Douglas W. Hubbard, How To Measure Anything. I haven’t finished reading the book, but I’ve already found it valuable. It contains some mathematics, but even reading Part 1 sufficed to deliver value for me. I imagine that it will help crystallize many of the ideas that I’d been sitting with for 10-20 years.
J. B. Rainsberger, “How You’ll Probably Learn to Split Features”. Groups still struggle with how to split features effectively. I propose a model for understanding where your group currently stands and what they can expect to happen as they learn and challenge themselves to do more.
J. B. Rainsberger, “Three Steps To a Useful Minimal Feature”. Even in 2021, confusion reigns on the question of the MVP. Here I propose a concrete technique for finding the part of a feature to build first.
J. B. Rainsberger, “Free Your Mind to Do Great Work”. Articles on a variety of topics related to working better by taming our mind.
J. B. Rainsberger, “A Critical Step Towards #NoEstimates”. Maybe by 2021 I can promote articles on this topic without incurring the wrath of rabid detractors. I guess we’ll find out in the comments section!
Kent Beck, “Fast/Slow in 3X: Explore/Expand/Extract”. Kent first introduced me to 3X in 2018 and I immediately saw its potential as a generalization of Extreme Programming. When I thought about the various XP practices, I saw that some made less sense in the Explore and Expand phases, while others become critical in the Expand and Extract phases. This model can help us understand which practices, principles, and values matter most for us right now.
Shiftup, “The Business Lifecycle”. I quite like Jurgen’s metaphor of the company as a family of people at various stages of human development and the lines of business within the company each as people in different stages of their lifetime. For example, we need responsible adults earning reliable money in order to give the children the freedom to play and the grandparents the security they need as they face their own mortality.
Jurgen Appelo, “Kent Beck’s 3X Model versus Shiftup’s Business Lifecycle”. Jurgen compares the two models.