I’ve been using Scrum and similar ‘timeboxed’ project management methods in software development for 19 years (starting with DSDM in 1997). During this period I’ve had some spectacular successes, some more average results, but rarely any spectacular failures.
So, having got that out of the way, I want to talk about why I think imposing artificial ‘timeboxes’ causes almost as many issues as it resolves.
I’m going to take examples from a number of projects I’ve worked on over the last few years with the names and places changed to protect the not-so innocent.
The idea of setting a timeboxed period for delivery of software has been around for more years than I have (and that’s a lot!). All ‘projects’ have a deadline, an end point. This constraint is usually dictated by two things: a business imperative and/or a finite amount of money. All too often I see the second being the main driver for delivery; see my talk ‘Projects kill Agile Development’ on why this is a lousy way to constrain software delivery and the bad behaviours it drives.
However, what I’m discussing here is the imposition of an artificial ‘timebox’ on a delivery team as a way of introducing ‘urgency’ and measuring progress. This principle of an ‘x week long’ timebox is most obviously demonstrated in Scrum but is present in many agile software development methods, including XP, DSDM and Crystal.
The intent of this timebox is actually to focus the team on building small, well-tested deliverables that are released to production to provide early ROI and to learn what customers actually find useful. The learning is as important as, if not more important than, the money made or saved.
However, in almost every ‘project’ I’ve worked on in large organisations, this ‘release regularly’ style of iteration doesn’t happen. Instead, an artificial time period is established (usually management read a book on Scrum and establish two weeks, as if there were something magical about that frequency – BTW, the team should decide the timebox size!). This artificial timebox is punctuated by some kind of review meeting and a retrospective, usually focusing on what went badly rather than on positives that can be expanded upon.
The issues with this artificial timeboxed period are many:
- People over-estimate what can be achieved, work gets compressed toward the end of the period, and quality suffers.
- Artificial pressure imposed by management leads to shortcuts on the work compressed toward the end of the timebox. This most often affects tasks such as testing (particularly non-functional testing), support materials, user guides, etc. These tasks are then either rushed or dropped altogether.
- Often this timebox is perceived through the established lens of the ‘build phase’ (or, if you are lucky, the ‘build and test phase’). This leads to tasks required before or after the ‘build’ being given less focus or being discarded altogether.
- Timeboxes are seen not as a way to release frequently and learn whether the correct thing is being built, but as a measurement of progress towards a fixed and agreed set of ‘requirements’. This is predicated on the fallacy that the senior stakeholders fully represent the user population, are prescient, and can foresee how needs and technology will change. It leads to projects with regular, but not particularly useful, management checkpoints, disguised as reviews, against an artificial and unvalidated understanding of the needs of the actual customers. These reviews tend to focus on functional, demonstrable features, and any non-functional or technical concerns take a back seat.
I have seen this ‘iterative waterfall’ approach go on for many months, or even years, before software is released into the ‘wild’ for true validation.
There have been many attempts to address the deficiencies of the ‘artificial timebox’. The most frequent is the ‘Definition of Done’ (DoD), which defines the tasks that must be complete and verified before a unit of functionality is declared ‘done’. The DoD is a useful construct, but all too frequently I see it constrained by artificial organisational ‘silos’. For example, the following activities or areas are excluded from the DoD:
- ‘copy’ and ‘legal’ are almost never integrated into the team, so they don’t form part of the DoD,
- the route to live involves handover points to other teams that restrict the frequency of delivery and are not accounted for in the DoD,
- UX is ‘outside’ the development team and is not in the DoD,
- non-functional requirements testing is outside the development team and not in the DoD,
- user guide/help/tutorial materials and handover to a ‘dedicated support’ team are not included in the DoD,
- ‘ivory tower’ architecture imposes artificial constraints on the development team without real justification or understanding of the team’s needs or environment,
- technical constraints, particularly around the provision of development/test environments, mean the team cannot include some elements of testing in the DoD (again, frequently NFR testing).
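To make the idea concrete, here is a minimal sketch (all checklist items and names are illustrative, not a prescribed list) of a DoD that deliberately includes the activities the silos above tend to exclude, and treats a story as ‘done’ only when every item is verified:

```python
# Illustrative Definition of Done, including the often-excluded activities.
DEFINITION_OF_DONE = [
    "code reviewed and merged",
    "unit and functional tests passing",
    "non-functional (performance/security) tests passing",
    "UX review complete",
    "copy and legal sign-off obtained",
    "user guide / help material updated",
    "deployed to a production-like environment",
]

def is_done(story_checklist: dict) -> bool:
    """A story is 'done' only when every DoD item has been verified."""
    return all(story_checklist.get(item, False) for item in DEFINITION_OF_DONE)

story = {item: True for item in DEFINITION_OF_DONE}
story["non-functional (performance/security) tests passing"] = False  # often dropped
print(is_done(story))  # skipping NFR testing means the story is not done
```

The point of the sketch is that a story with one unchecked item, however ‘minor’ the silo considers it, is simply not done.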
Another interesting symptom of this artificial timebox is the exclusion of some necessary activities.
Probably the most common of these is the ‘sprint zero’. ‘Sprint zero’ is a timebox that is introduced at the start of a ‘project’ to provide a place to carry out activities that define the ‘scope’ of the delivery and to allow less ‘agile’ parts of the organisation to provide input to the project. There are a number of issues with ‘sprint zero’ but most fall into the category of assuming that all requirements are known, well understood and immutable. I’ve never worked on a project where this has held true.
Another strange symptom I’ve seen is to move early activities that are pre-requisites of the build and test phase to their own timeboxes that precede the build and test timebox. Most frequently I’ve seen functional tests and UX work moved to their own t-1 or t-2 ‘sprints’ with a ‘Definition of Ready’ (DoR) introduced.
I find this practice extremely disturbing and unhelpful. It splits the development team into ‘them and us’ (UX/testers and devs), and it treats development and test execution as more important than UX and test/environment preparation. It also makes these pre-development activities less visible. I have even seen it lead to a blame culture between the UX/testers and the developers. This is just ‘waterfall’ in miniature.
There are more obvious symptoms like:
- No attempt at a DoD, so no real idea of whether a feature/story is ready to ship,
- A fixation on a story/feature being ‘complete’ once it has been reviewed and passed its Definition of Done (a feature is apt to change even, or more especially, when it’s in production, so how can you ever tell it’s ‘complete’? It may be ready to ship/deliver, but that’s it!),
- Moving activities that are required for release outside the timebox to make work ‘fit’ in the artificial time constraint,
- Timeboxes appearing on Gantt charts to placate managers who are used to controlling time and resource and don’t understand the concepts of scope management and feedback from your customers.
Frequently these artificial timeboxes can go on for months before code gets anywhere close to an environment representative of ‘live’, let alone actually released. The apparent feedback and progress lull the development team and stakeholders into complacency: everything is as expected, fulfils customers’ needs, and will scale and perform correctly in production.
So how do we address these concerns?
Most of the symptoms above come from one of two causes:
- Organisational silos preventing the team from incorporating all the skills required to ship (including timely input from the legal department, marketing, etc.)
- Working with ‘features’ or ‘stories’ that are too large
The first takes a cultural change that is difficult, if not impossible, to drive from the perspective of an individual project or initiative and is beyond the scope of this post.
The second is something that comes with education and practice. Some people advocate making iterations smaller to try to force the team to focus on very small amounts of functionality. I find this rarely works, as the ‘features’ or ‘stories’ still rarely conform to whatever timebox size has been chosen.
This is where an agile coach can help in educating the business stakeholders that releasing something very small can still be useful as a learning experience.
If we don’t use timeboxes, how do we ensure that the team is focusing on what’s required, that progress is being made, and that we are regularly releasing to gain feedback from the only audience that really matters: our customers?
The reason for timeboxing is to constrain what the team is working on, ensuring that the team isn’t losing time to constant context switching between tasks and that they see ‘features’ through to release. The secondary reason, for which I find the timebox much less useful, is to provide a guide to future planning.
Another way of reducing context switching is to adopt Kanban practices and enforce work-in-progress (WIP) limits. If you rigidly enforce limits on the work that can be in progress at any one time, and make the flow of ‘features’ through activities visible, you can use this to educate both stakeholders and the team about sizing features.
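The mechanics of a WIP limit are simple enough to sketch. The following is an illustrative model (the column names and limits are made up, not a recommended configuration): pulling a feature into a column fails when that column is at its limit, so the team must finish something before starting something new.

```python
# Illustrative Kanban board with per-column WIP limits.
WIP_LIMITS = {"analysis": 2, "build": 3, "test": 2, "release": 1}

class Board:
    def __init__(self):
        self.columns = {name: [] for name in WIP_LIMITS}

    def pull(self, feature: str, column: str) -> bool:
        """Pull a feature into a column only if its WIP limit allows it."""
        if len(self.columns[column]) >= WIP_LIMITS[column]:
            return False  # limit reached: finish work before starting more
        for items in self.columns.values():  # remove from any previous column
            if feature in items:
                items.remove(feature)
        self.columns[column].append(feature)
        return True

board = Board()
board.pull("feature-A", "build")
board.pull("feature-B", "build")
board.pull("feature-C", "build")
print(board.pull("feature-D", "build"))  # False: 'build' WIP limit of 3 reached
```

A blocked pull is the signal: either finish and release something, or the feature was too big to flow in the first place.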
If ‘done’ means ‘released to our customers’ we can use very small, incremental changes to test the waters of whether our latest products or approaches have a market.
We should measure the value of re-engineering our software. Re-architecting our platform to scale out under peak load may not be sexy, but neither are thousands of customer complaints about our site being unavailable. Isn’t there as much value in reducing customer complaints and protecting our reputation as in gaining market share with a new offer?
Isn’t it easier to plan for future change by measuring average ‘cycle’ time and throughput to identify how long a feature actually takes to be delivered to the customer? This also has the added benefit of educating stakeholders in how to break ‘features’ or ‘stories’ into smaller deliveries to lower cycle time and smooth out throughput. Kanban techniques ‘done right’ also ensure that the team maintains visibility of, and focus on, the activities required to release, not just an artificially imposed deadline at the end of a timebox.
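Both measures fall out of two timestamps per feature: when it was started and when it reached the customer. A minimal sketch (the dates below are invented for illustration) of the arithmetic:

```python
from datetime import date

# (started, released) per delivered feature -- illustrative data only
features = [
    (date(2016, 1, 4), date(2016, 1, 12)),
    (date(2016, 1, 6), date(2016, 1, 15)),
    (date(2016, 1, 11), date(2016, 1, 20)),
]

# Cycle time: elapsed days from starting a feature to releasing it.
cycle_times = [(released - started).days for started, released in features]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: features delivered per week over the observed period.
period_days = (max(r for _, r in features) - min(s for s, _ in features)).days
throughput_per_week = len(features) / (period_days / 7)

print(f"average cycle time: {avg_cycle_time:.1f} days")
print(f"throughput: {throughput_per_week:.1f} features/week")
```

Unlike sprint-by-sprint velocity, these numbers are grounded in actual delivery to the customer, which is what makes them useful for forecasting.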
However, I want to add a note of warning. Kanban requires discipline from both the development team and the organisational stakeholders. Done well, it requires infrastructure to support rapid, parallel change, and high levels of automation in testing, continuous integration and delivery. Most of all, it needs the organisation as a whole to understand what the real value of its software platform is.
For all of the above reasons I most often start with a ‘timeboxed’ approach to delivery in an organisation with no exposure to agile or lean practices. I would, however, adopt some of the measures that Kanban uses to identify bottlenecks and enforce WIP limits.
I think the takeaway from this rather rambling post is: look out for the symptoms of artificial ‘timeboxing’, and keep in mind that nothing is of value until it’s released and that ‘testing’ doesn’t stop in production. We need to gather customer feedback, monitor how design decisions impact our system, and then design mechanisms to assign value to that feedback so we can identify what to feed back into our development teams.