Horrible horrible metrics

Metrics are a bad thing, right? They are anti-Agile and an unnecessary waste of time!

In my honest opinion, that is not entirely true. Ultimately, metrics are just data; handled correctly, data gives us facts, and facts help drive more accurate decision making and continuous improvement. Handled correctly, they can also generate confidence in the business, which in turn helps the development team focus on producing effective and valuable software.

Let me start by making it clear that metrics in no way replace the need to know and trust your teams. 'Individuals and interactions over processes and tools' absolutely holds true, and if you are gathering metrics to use as a stick, then stop. Using metrics against people will leave you with a pile of useless, skewed data, a team of individuals driven to shape metrics rather than deliver value, and a big hole in your pocket in the shape of quality costs.

So how do I use metrics to help whilst avoiding all of the above pitfalls? Well, I'm not going to give you a list of useful metrics to use, but I will try to use my experience, opinions and some examples to help you understand and identify the metrics that work for you.

The first thing you need to do is make sure you understand your teams: what they value and what challenges they face. Most effective software development teams do some form of retrospective, and unless you are clear on what that 'data' is telling you, don't go looking elsewhere, or you may find yourself reinventing the wheel and telling people what they already know.

The second thing to look at is the gaps. What 'data' do you need that you don't already have, but that will tell you a valuable story? It's important that it tells you an accurate and valuable story, or you will just be wasting time.
An example of a metric that tells you very little:

  • Number of unit tests written - This is a bit like judging how good a personal secretary is based on how many words per minute they can type. Metrics like this are subject to the 'observer effect', whereby if developers know someone is tracking the number of unit tests written, they are likely to include many irrelevant or useless tests just to get the number up. So you are actually increasing the amount of unproductive time, for data which only tells you that unit tests are being created, as opposed to what impact they are having on the business. You are also looking at micro metrics, and in my experience looking at metrics from a micro perspective will always have a much higher cost-to-value ratio.

An example of a metric that tells you a little more:

  • Value delivered to customers - A lot of effective teams ask product owners and BAs to assign relative value to user stories; software is all about the value it delivers to customers, after all. There are still things this won't tell us about the software being delivered, but we can see a lot about the productivity of a team and get a pretty good idea of how valuable each release is to the business. You will probably also notice that, for a relatively low cost, you are getting much more value out of your metric.
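
A metric like this is cheap to compute once the relative values exist. As a minimal sketch (the story fields and numbers here are entirely made up for illustration, not taken from any particular tool), summing the value of completed stories per release might look like:

```python
from collections import defaultdict

# Hypothetical story records: each carries the relative value the product
# owner assigned and the release it shipped in. Field names are illustrative.
stories = [
    {"release": "2024.1", "value": 8, "status": "done"},
    {"release": "2024.1", "value": 3, "status": "done"},
    {"release": "2024.2", "value": 5, "status": "done"},
    {"release": "2024.2", "value": 13, "status": "dropped"},
]

def value_delivered(stories):
    """Sum the relative value of completed stories, grouped by release."""
    totals = defaultdict(int)
    for story in stories:
        if story["status"] == "done":
            totals[story["release"]] += story["value"]
    return dict(totals)

print(value_delivered(stories))  # {'2024.1': 11, '2024.2': 5}
```

The point is not the code itself but how little machinery the metric needs once the team is already recording value on stories.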

Once you have a good perspective of what gaps you have and what data you think will give you a valuable view of those gaps, I would encourage anyone to discuss it all with the teams, as opposed to trying to gather the data secretly. I don't care what level you are at: sitting down with the teams, helping them to understand why you think the metrics will be valuable, what it means to the team and what it means to the business, and listening to the feedback will surprise you. Once the developers have a better understanding, what you are likely to get from them is:

  • Suggestions for a better or more valuable bit of data or metric to tell the story.
  • The most efficient way of getting the data (I will come on to automating metric gathering shortly).
  • Greater trust and respect - Engaging the teams and helping them to understand will not only avoid the perception of the data being used as a stick but also encourage them to make use of the data to improve.

So you have a good idea of the metric(s) you think will tell a valuable story, you have engaged the teams and you want to start gathering the metrics? Wait: pulling all of this data regularly could be really painful! This is where investing the right time, effort and skills could prove its worth. Aim to automate as much of the data gathering and metric presentation as possible, seeking the help of whoever you can. Trust me when I say that getting this right at the start will pay for itself many times over, very quickly.
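
What "automate the gathering" looks like depends entirely on your tooling, but even a small script beats pulling numbers by hand every week. As one hedged sketch, assuming your tracker can export stories to CSV with columns I've invented here (`release`, `status`, `value` - adjust to whatever your own tool actually exports):

```python
import csv
from collections import defaultdict

def summarise_export(path):
    """Aggregate value delivered per release from a tracker CSV export.

    Assumes columns named 'release', 'status' and 'value'; these are
    illustrative and will differ between tools.
    """
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["status"].strip().lower() == "done":
                totals[row["release"]] += int(row["value"])
    return dict(totals)

# Typical use: run on a schedule and feed the result into whatever
# dashboard or report the teams have agreed on.
# summarise_export("stories_export.csv")
```

Once something like this runs on a schedule, the ongoing cost of the metric drops to almost nothing, which is exactly the pay-off described above.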

Once you are ready to start pulling the data, set expectations appropriately. Use metrics as a positive, forward-thinking tool to improve. To this end, create a baseline and make a clear statement that you are only looking at metrics from the baseline forward. The world is always changing, and it is important that you don't fall into the trap of trying to drive continual improvement based on the distant past. Focus on the here and now and how that changes going forward.
I would also start collecting the data well in advance of any expectations around reporting it to other groups. This will do two things:

  • It will allow you to get a good grip on whether it is actually telling you what you thought it would before you have to explain it to others (think of it like a proof of concept). Don't be afraid of dropping metrics that, on reflection, don't actually tell you anything valuable.
  • You will be able to identify consistent trends. It is really important when using metrics that you consider trends over a period of time. Data for an isolated moment in time is hugely susceptible to one-off skews caused by something that is unlikely to ever happen again. What you should be looking for in metrics is a trend of continuous improvement: a positive message of valuable changes driving a continually improving product.
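
Smoothing is one simple way to look at trends rather than isolated points. A minimal sketch (the weekly figures are invented purely for illustration): a moving average over the series from the baseline onwards damps the one-off spikes so the underlying direction shows through.

```python
def moving_average(series, window=3):
    """Smooth a metric series so one-off skews don't dominate the story."""
    if window > len(series):
        return []
    return [
        sum(series[i : i + window]) / window
        for i in range(len(series) - window + 1)
    ]

# Hypothetical weekly 'value delivered' figures, baseline onwards.
weekly_value = [10, 4, 12, 11, 14, 13, 16]
print(moving_average(weekly_value))
```

The raw series dips sharply in week two, but the smoothed series shows the upward trend that actually matters; that is the kind of story worth reporting.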

Finally, I would encourage anyone making use of metrics to be open-minded about the story you uncover. Don't let preconceptions of what you might find in the metrics drive you to manipulate the findings. They may reinforce what you already knew, but they may equally tell you a valuable story that you weren't expecting.
Understand the results, put appropriate context around the metrics and discuss the findings with the teams, but absolutely do not ignore the surprises. If you are going to invest the time in this and you want to get value out of it, then be prepared to learn from all of the data, not just the parts that meet a preconceived agenda. There is an argument that you learn much more from what you didn't already know than from what you did.

Hopefully I have not come across as particularly pro- or anti-metrics. Used correctly they can be a powerful ally, but misused or misunderstood they have the potential to be costly and divisive.

On a lighter note, a bit of Dilbert metrics humour:


Matt Parker

16 years in software development, 12 of those in software testing, 6 years in test automation and 4 years in an Agile environment. What have I learnt? There is no right way to test, just a right mindset.
