For one reason or another, software development over the years has taken inspiration from the construction industry, whether in the form of design patterns or the Gantt chart. Nevertheless, there are obvious differences such as the fact that suspension bridges cannot easily be refactored, and are hard to reboot when they crash.
One other similarity between construction and software development is that it is easy to estimate how long something will take, and what it will cost, if you have built it before. Posh pre-fabricated homes, the latest fashion in construction, have a fixed cost and a highly predictable build time. Furthermore, their quality can be guaranteed. At the other extreme, Gaudí's Sagrada Familia is fabulously over budget and still under construction a century and a quarter after it was started.
Engineers and architects spend a great deal of time studying previous projects. In this way, when presented with a new commission they are able to pull out something similar that has been done before, and to a greater or lesser extent tweak it to fit requirements. Such projects tend not to run over budget. Despite borrowing from methodologies such as lean, used by manufacturing companies to reduce waste and deliver value in production-line-based industries, professional services companies tend to work at the Gaudí end of the scale. However, our clients are not normally religious devotees with an inexhaustible supply of time and money. So probably the most pressing question we face is how to quantify and manage risk. One of the major sources of risk is development running over time or budget – in other words, poor estimation. Sadly, much of the estimation we do is wrong – not usually by an order of magnitude, but often enough to give us nasty headaches and the occasional ulcer.
In order to estimate, you have to break things down into smaller bits. One important rule of thumb when estimating is to make sure everything you’re estimating is about the same size. This also has the advantage that once you’ve broken your work down in this way, you can basically just count up the pieces of work thus enumerated to obtain an estimate. You can then roughly multiply by a constant factor once you establish how long a piece of work takes. This implies you are not estimating the total time your project will take, just how much work there is. Another guideline when estimating is to break a project down into pieces of work that you have done before, which you can thus estimate with a high degree of confidence.
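As a minimal sketch of this count-and-multiply rule of thumb (the story names and the days-per-story factor below are invented for illustration, not drawn from any real project):

```python
# Hedged sketch: estimate total effort by counting similar-sized pieces
# of work and multiplying by an empirically observed cost per piece.
# All figures here are illustrative assumptions.

def estimate_days(story_count, days_per_story):
    """Total effort = number of same-sized pieces x observed cost per piece."""
    return story_count * days_per_story

# Suppose past projects suggest a same-sized story takes ~1.5 developer-days.
stories = ["login page", "user search", "report export", "audit log"]
print(estimate_days(len(stories), 1.5))  # 6.0 developer-days
```

The point of the separation is visible in the signature: counting the stories measures how much work there is, while the constant factor – which you calibrate later from experience – converts that into time.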
The best estimates are obtained in situations when you have done something very similar before, or can use off-the-shelf software or components to do something. Conversely, estimation goes wrong when we forget chunks of functionality, deal with domains we don’t fully understand, work on projects with ill-defined goals, integrate with external systems, or omit infrastructural tasks and time for refactoring. The most valuable assets when doing estimates are people with relevant experience and artifacts from similar previous projects. In my opinion, the best thing that can be done to institutionalise good estimation (in the sense of making it repeatable rather than going insane) is to decide what artifacts and metrics to preserve from projects with this goal in mind.
One of the best quotes I’ve read on software in the last few weeks is this: “The goal is to transform software engineering from a craft to an engineering discipline”. Although the speaker, Alex Stepanov, is talking about the C++ Standard Template Library, we need to take some more lessons from engineering in order to find a vocabulary and methodology to model the risks in our projects better. This will also help us quantify the return on investment (RoI) we provide. ThoughtWorks are expensive people to hire. Large companies normally have a procurement department which decides which vendors to use. As Sriram Narayan pointed out to me today, most of them imagine IT vendors to be providing essentially a manufacturing service, and hence their first question is “what RoI do you provide that justifies the cost of hiring you?”
To me, one of our leading differentiators is our experience working in the Gaudí space, and doing it well. We even take on fixed bid projects in this area, and hence eat our own dog food in terms of being prepared to quantify risk. We do this because we know our analysts and developers are the smartest in the business, and will be able to ask the right questions during the project inception phase so as to avoid major problems down the road. However, we can only quantify this once we have a model – the same one we can use to assure our estimation process.
To create such a model we could take artifacts from previous projects such as the original requirements stories produced during inception along with estimates, the complete list of stories actually produced during the delivery phase with estimates and the actual time taken. We could then produce a quantitative analysis of where we were right, where we were wrong, and how to do things better next time. With information like this from a few projects, we could then abstract a common baseline and refine our methodology. Analyzing project retrospectives, although generally more qualitative than quantitative, would also help us discover the main risk areas in our projects and what we did to minimize them.
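One crude way such an analysis might start (every figure below is invented for illustration) is to pair each story's inception estimate with its actual delivery time and look at the distribution of the overrun ratio:

```python
# Sketch of a quantitative estimate-vs-actual comparison across stories
# preserved from a past project. All numbers are illustrative assumptions.

def overrun_ratios(records):
    """For each (estimated_days, actual_days) pair, return actual/estimate."""
    return [actual / estimate for estimate, actual in records]

def mean(xs):
    return sum(xs) / len(xs)

# (estimated days, actual days) per story, kept from a finished project
history = [(2.0, 3.0), (1.0, 1.5), (4.0, 4.0), (2.0, 5.0)]
ratios = overrun_ratios(history)
print(mean(ratios))  # average overrun factor for this fictional project: 1.625
```

Even this simple ratio, aggregated over a few projects, would show whether our errors cluster in particular kinds of work – integration stories, say – which is exactly the baseline the model needs.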
Along with this model, we also need to ensure we apply a methodology to control delivery risk. One of the best ideas I have seen in this space comes from DSDM. They have a method called MoSCoW, where requirements are labelled as being “must have”, “should have”, “could have” or “won’t have”. Crucially, the proportion of requirements that are “must have” should be around 60%, with around 20% “should have” and 20% “could have” (thanks to Marco Jansen for telling me this). This latter point is often missed out in descriptions of MoSCoW, but is essential because otherwise you have no way of managing risk. It is also only a starting point, because a more accurate assurance process would tell you how best to set these percentages for a particular project.
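A quick sketch of how that proportion check might work in practice (the backlog, labels, and tolerance here are illustrative assumptions, not part of the DSDM definition):

```python
# Sketch: check a backlog's MoSCoW mix against the 60/20/20 guideline.
# The example backlog and the 5% tolerance are illustrative assumptions.

from collections import Counter

TARGET = {"must": 0.60, "should": 0.20, "could": 0.20}

def moscow_mix(labels):
    """Return the proportion of each in-scope MoSCoW category in a backlog."""
    counts = Counter(labels)
    total = len(labels)
    return {category: counts.get(category, 0) / total for category in TARGET}

def within_guideline(labels, tolerance=0.05):
    """True if every category is within `tolerance` of the 60/20/20 target."""
    mix = moscow_mix(labels)
    return all(abs(mix[c] - TARGET[c]) <= tolerance for c in TARGET)

backlog = ["must"] * 6 + ["should"] * 2 + ["could"] * 2
print(moscow_mix(backlog))        # {'must': 0.6, 'should': 0.2, 'could': 0.2}
print(within_guideline(backlog))  # True
```

The "could have" slice is the risk buffer: if everything is a "must have", there is nothing to drop when the project runs late, which is why the proportions matter as much as the labels.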
When we have methodologies like this for the inception phase of projects, including estimation, we will be a big step closer to saying we have an assured engineering process which allows us to accurately and repeatably assess and manage risk.