Monday 9 February 2015

"3 Strikes" Quality Monitoring

This concept came about through a discussion about how defect waste should be measured. There are plenty of posts on the 7 wastes, which I will not go into now, but the discussion led to something interesting.

We identified 3 different gates which we think can be used to actively monitor the quality of our organisation's products and development processes:

1) Defect Waste - This measure sits purely within our sprint. If we are working on something new and find a defect with it during in-sprint QA, we label the time spent resolving the issue as defect waste. This may seem harsh, but the defect has interrupted our flow through the development process. We had complete acceptance criteria and access to the domain knowledge that created them - so in an ideal world this should never happen.

This is a symptom of other problems, e.g. are developers buying into the acceptance criteria, are the criteria strong enough, are the developers even reading them? It also raises questions around the effectiveness of regression and automation suites, which is another area the team can improve on.
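To make this concrete, here is a minimal sketch of how the measure could be tallied from a tracker export. The field names ("sprint", "label", "hours_to_resolve") and the "defect-waste" label are assumptions about how the data might be recorded, not any real tracker's API:

    from collections import defaultdict

    # Hypothetical issue records exported from a tracker; the field
    # names and labels are assumptions, not a real schema.
    issues = [
        {"sprint": "Sprint 42", "label": "defect-waste", "hours_to_resolve": 4.0},
        {"sprint": "Sprint 42", "label": "story", "hours_to_resolve": 0.0},
        {"sprint": "Sprint 43", "label": "defect-waste", "hours_to_resolve": 2.5},
    ]

    # Sum the time lost to in-sprint defects, per sprint.
    waste_by_sprint = defaultdict(float)
    for issue in issues:
        if issue["label"] == "defect-waste":
            waste_by_sprint[issue["sprint"]] += issue["hours_to_resolve"]

    for sprint, hours in sorted(waste_by_sprint.items()):
        print(f"{sprint}: {hours:.1f}h of defect waste")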

2) Release Testing - A series of teams all feed into a release, which is where we realise an integrated version of our application. We typically run automated suites and exploratory testing here; ideally these are built from tests created by the teams, in addition to system-level tests. Anything we discover here is irritating, since we now need to fix it before we can release.

I find this gate hard to monitor, since the issues raised look like any other issue and it is difficult to pick them out - we are only interested in issues raised during release testing, and dates are the only data I can currently rely on. This gate highlights the complexity of integrating work from multiple teams, and depending on the problems found, it may suggest improvements to each team's Definition of Done (DoD).
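Given that dates are the only reliable signal, one rough way to pick these issues out is to filter on the release-testing window. This is a sketch under that assumption; the field names and dates are hypothetical:

    from datetime import date

    # Assumption: anything raised between the start of release testing
    # and the release date is attributed to this gate.
    TESTING_START = date(2015, 2, 2)
    RELEASE_DATE = date(2015, 2, 9)

    issues = [
        {"key": "APP-101", "raised": date(2015, 2, 3)},
        {"key": "APP-102", "raised": date(2015, 1, 28)},
    ]

    release_testing_issues = [
        i for i in issues if TESTING_START <= i["raised"] <= RELEASE_DATE
    ]
    print(len(release_testing_issues), "issues raised during release testing")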

3) Bug Reports from Customers - The last place a bug can be found is in the hands of the customer, which means the organisation has failed! A bug should not make it this far, but that does not mean we cannot learn from it. The number of issues raised by customers can be used as the final gated measure.

Such a bug has evaded both of the other gates, so it may well be down to customer-specific data or configuration. It can inform how we guard against regressions for specific customers who rely on features that are unique to them or rarely used.
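Counting these is more straightforward, since customer-raised tickets usually identify the customer. A minimal sketch, assuming a support-ticket export with a "customer" field:

    from collections import Counter

    # Hypothetical support tickets; the "customer" field is an
    # assumption about how escaped defects are recorded.
    tickets = [
        {"customer": "Acme", "release": "4.2"},
        {"customer": "Acme", "release": "4.2"},
        {"customer": "Globex", "release": "4.1"},
    ]

    # The third gate: escaped defects counted per customer.
    by_customer = Counter(t["customer"] for t in tickets)
    for customer, count in by_customer.most_common():
        print(f"{customer}: {count} customer-reported defects")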

These 3 measures can be used to monitor trends. Ideally, we would see all of them trending towards zero in every sprint, every release and for every customer. The combination of the 3 gates tells us more about how we treat quality throughout the development process.
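To illustrate the combined view, here is a sketch of a simple trend report across the three gates - the numbers are invented placeholders purely to show the shape of the report:

    # Invented placeholder values; one column per sprint/release period.
    periods = ["Sprint 42", "Sprint 43", "Sprint 44"]
    gates = {
        "defect waste (hrs)": [12.0, 9.5, 6.0],
        "release test bugs": [8, 5, 5],
        "customer reports": [3, 2, 1],
    }

    # Print each gate as a row so the trend towards zero is visible.
    print(f"{'gate':<20}" + "".join(f"{p:>12}" for p in periods))
    for gate, values in gates.items():
        print(f"{gate:<20}" + "".join(f"{v:>12}" for v in values))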

By making these measures visible, we can also start to refocus the development teams on a wider view of what we mean by quality for the products we build.
