In my own development projects, and in coaching others on theirs, I keep coming across that age-old tension between delivery and quality: someone screaming 'ship it, ship it' while the development team are yelling 'it's not fit for purpose yet'.
This balancing act frequently arises on fixed-price contracts, and sometimes in Agile developments, when the customer doesn't really understand the concept of a minimum feasible subset. In both cases the problem is one of communication: how does the customer satisfy themselves that they are getting something faster but still good enough, and how does the team determine and communicate what is 'good enough' and what is not?
You can't discuss what you can't see or understand, so making the quality criteria visible to the customer enables the 'do you really want to cut this corner?' conversation to happen.
So we need to measure current quality levels and determine what they should be. This provides an objective basis for discussing the trade-off between functionality and quality. That's why I endeavour to measure test quality, coding-rules compliance, and non-functional qualities (performance, security, etc.).
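One lightweight way to make those quality levels visible is a simple 'quality gate' report that compares measured metrics against agreed targets. The sketch below is purely illustrative: the metric names, thresholds, and measured values are hypothetical, not drawn from any particular project or tool.

```python
# A minimal sketch of a "quality gate" report: compare measured quality
# metrics against agreed thresholds so team and customer can see, at a
# glance, which corners are being cut. All names and numbers here are
# illustrative assumptions, not from any real project.

THRESHOLDS = {
    "test_coverage_pct": 80.0,     # minimum acceptable
    "coding_rule_violations": 10,  # maximum acceptable
    "p95_response_ms": 500.0,      # maximum acceptable
}

# Coverage is better when higher; the other metrics are better when lower.
HIGHER_IS_BETTER = {"test_coverage_pct"}

def quality_gate(measured: dict) -> list[str]:
    """Return one human-readable pass/fail line per tracked metric."""
    report = []
    for metric, threshold in THRESHOLDS.items():
        value = measured[metric]
        ok = value >= threshold if metric in HIGHER_IS_BETTER else value <= threshold
        status = "PASS" if ok else "FAIL"
        report.append(f"{status}: {metric} = {value} (target {threshold})")
    return report

if __name__ == "__main__":
    snapshot = {
        "test_coverage_pct": 72.5,
        "coding_rule_violations": 4,
        "p95_response_ms": 410.0,
    }
    for line in quality_gate(snapshot):
        print(line)
```

A report like this turns 'quality' from a feeling into a short list the customer can read, which is exactly what the corner-cutting conversation needs.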
However, metrics are inherently dangerous: they can lead to focusing only on what is measured, to the exclusion of other factors, and so to 'sub-optimisation' of the overall system. Sub-optimisation is not just a performance issue. The 'system' may be the set of features delivered, so sub-optimising on quality criteria may, counter-intuitively, lead to delivering something that doesn't fulfil the customer's needs. So we also need to measure customer satisfaction with the features delivered. This is more subjective, but it's where Agile processes can help, with reviews and retrospectives providing the feedback.
So we need to measure quality and delivery, but not to the exclusion of other aspects of the system. We also need to ensure the measurements cover the perspectives of the system's full audience, including support people, users of varying degrees of technical literacy, and so on.
This is a bewildering amount of information to gather and process, even for the simplest one-page dynamic web application, so one technique is to involve that wider audience directly:
- For administrative functions, can you limit the scope to a handful of people that you involve in the testing process so they are trained by default and need little documentation?
- Can your user representative (product owner/ambassador user) draft guides and documents?
- Can you get technically literate users to write user documentation during Beta testing?
- Can infrastructure and support people be involved early in the development to write support documents and automate support processes?
I'm not sure anyone gets the balance of quality versus delivery perfectly right. But recognising the need to measure quality as well as delivery, and to gather perspectives from the wider audience, is a positive step in the right direction down 'quality street', and it might tell you when you can stop walking!