I don’t think you should estimate small tasks¹ on software projects, but if you insist—or if your context demands it as a condition of your employment—then you might benefit from one simple trick to balance accuracy with speed: identify one reason that your estimate is reasonable and one reason that it might be overconfident.
This advice comes straight out of Douglas Hubbard’s _How To Measure Anything_.
A “pro” is a reason why the estimate is reasonable; a “con” is a reason why it might be overconfident. For example, your estimate of sales for a new product may be in line with sales for other start-up products with similar advertising expenditures. But when you think about your uncertainty regarding catastrophic failures or runaway successes in other companies as well as your uncertainty about the overall growth in the market, you may reassess the initial range. Academic researchers found that this method by itself significantly improves calibration.
I believe that you can use this technique to drastically speed up those boring and painful iteration/sprint planning meetings. Rather than become mired in endless discussions about the fine details of a new feature, ask everyone to do one extra thing when they estimate its cost: think of both a pro and a con for their estimate, then write them down, even if they only jot down a few words to help them remember the details. Writing the pro and con down increases the chances that they’ve truly thought about them and haven’t merely given a half-baked blink reaction in order to get on to the next thing. (That blink reaction might suffice, but I’d feel more comfortable if they put a little more thought into the exercise than that.) During the planning session, we might be able to use this as a shortcut to improving the accuracy of the estimates without exploring the feature in excruciating detail. We can save that step for when the time comes to actually deliver the feature.
Of course, even when using this technique, if you estimate the feature as a 2 and I estimate it as an 8, then we should probably talk about the difference. If we’ve thought about at least one pro and one con, then I feel even more confident that discussing our different assumptions would constitute valuable work, rather than debate for debate’s sake.
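To make the mechanics concrete, here is a minimal sketch in Python (all names and the divergence threshold are my own illustrative assumptions, not from the book or this post) of recording each estimator’s pro and con alongside their number, then flagging a wide spread—such as a 2 against an 8—for discussion:

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    estimator: str
    points: int  # e.g. story points
    pro: str     # one reason the estimate seems reasonable
    con: str     # one reason it might be overconfident

def needs_discussion(estimates, ratio=2.0):
    """Flag a feature for discussion when the estimates diverge widely,
    e.g. one person says 2 and another says 8."""
    points = [e.points for e in estimates]
    return max(points) >= ratio * min(points)

feature = [
    Estimate("A", 2, "similar to last sprint's report page", "unknown legacy code here"),
    Estimate("B", 8, "matches past integration work", "the API docs may be stale"),
]
print(needs_discussion(feature))  # 2 vs. 8 diverges widely -> True
```

The point of the sketch is only that the written pro and con travel with the number, so when a divergence is flagged, the conversation can start from recorded assumptions rather than from scratch.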
Allow me to reiterate that I believe you don’t need to estimate these tasks at all, but if your context demands it, then you might as well do it more accurately with less effort and in less time.
Now, some questions:
- Will you try this technique? Why or why not?
- Have you already tried this technique? Did it help you? What happened? What did you notice?
Douglas W. Hubbard, _How To Measure Anything_. A book that does what it claims on the cover: it provides concrete techniques, some quite mathematical, for measuring things that previously seemed impossible to measure.
¹ By small tasks, I typically mean anything smaller than about 2 months’ work for the group.