Every developer has their own style, habits and preferences, and this unique combination leaves distinct impressions in the codebase. Some of those impressions are associated with positive outcomes, others with negative ones, and as developers learn new skills and work on their personal development process, the impressions change.
We recently added the Contributor Overview dashboard, which focuses on a single team member and lets you see how your development process changes over time. We have had a lot of feedback and questions about it, especially about when, or if, we would add the ability to compare team members. That feature does not exist, and there is a very good reason why DevMetrics does not offer comparisons between team members.
Apples and Oranges
You’ll notice very quickly that the comparison option stretches only as far as comparing a contributor to themselves over time, never to another contributor. This is a deliberate design choice: comparing contributors solely on repo activity provides neither insight nor a fair comparison. DevMetrics does not track aspects such as code quality, work item completion, project phase or team role, all of which are important factors when comparing contributors.
Instead, the aim is to spot trends, identify areas of improvement and see whether things improve over time, or don’t. This brings to light another important point that applies to all metrics tracked by DevMetrics: there is no stick here, and DevMetrics should not be used to compare developers to one another.
There is no stick
The metrics tracked by DevMetrics are only part of a much bigger picture, one attained by looking at all aspects of a software development project: financials and time tracking all the way through static code analysis and, now, repo activity. Granted, DevMetrics fills a part that has been missing until now, but it is still only a part of the picture, and the aim is to help you ask better questions, not to take numbers in isolation.
A great example of this is code activity. Code activity is calculated by an algorithm, but it essentially boils down to the number of lines of code. Taken in isolation, you could claim that someone with an average daily code activity of 800 is better than someone with an average of 400. That is just not the case. Those 400, let’s call them lines of code, could be infinitely more complex or efficient than the other contributor’s 800. Or the second developer may be spending more time helping team members than the first.
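As a minimal sketch of the point above, the averages themselves are trivial to compute, and that is exactly the problem: the numbers are real, but the ranking you might draw from them is not. The daily figures below are illustrative, not DevMetrics data.

```python
# Hypothetical daily code-activity figures for two contributors
# (illustrative numbers, not DevMetrics output).
from statistics import mean

alice = [750, 820, 830]   # large, boilerplate-heavy changes
bob   = [390, 410, 400]   # smaller, denser changes plus mentoring time

avg_alice = mean(alice)   # 800.0
avg_bob = mean(bob)       # 400.0

# The averages alone say nothing about complexity, efficiency,
# or time spent helping teammates, so "800 > 400, therefore better"
# would be a misleading conclusion.
```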
This is why we don’t subscribe to the notion of leaderboards or “Top x Developers”. There are just too many factors to take into account when making the claim that one developer is objectively better than the next.
If the numbers don’t mean anything, why measure them?
The answer to this question is threefold:
Within a greater context, the numbers start to have meaning
Let’s take the example of commit complexity. If you’re in a feature phase, complexity will naturally be higher than in a bug fixing or maintenance phase, but one rule always holds: you want frequent, small, low-complexity commits, because these are less prone to defects. By combining commit frequency, commit size and complexity, we get a picture that means something.
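To make the combination concrete, here is a small sketch that flags commits that are both large and complex, the kind described above as more prone to defects. The field names and thresholds are assumptions for illustration, not DevMetrics internals.

```python
# Flag commits that are both large and complex.
# Field names ("lines_changed", "complexity") and the thresholds
# are illustrative assumptions, not DevMetrics' actual schema.
def risky_commits(commits, size_limit=400, complexity_limit=20):
    """Return commits exceeding both the size and complexity limits."""
    return [c for c in commits
            if c["lines_changed"] > size_limit
            and c["complexity"] > complexity_limit]

history = [
    {"id": "a1", "lines_changed": 120, "complexity": 4},
    {"id": "b2", "lines_changed": 950, "complexity": 35},
    {"id": "c3", "lines_changed": 60,  "complexity": 2},
]

flagged = risky_commits(history)  # only the large, complex commit "b2"
```

If the flagged list keeps growing, that is a signal worth acting on; if it shrinks after a process change, the change is working.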
If commits are often large and complex, there is room for improvement: steps can be taken to rectify this, and you can track how effective those steps are. The results should show up in other metrics as well; for example, the ratio of feature commits to maintenance commits should change over time, indicating fewer defects thanks to better development practices.
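The ratio mentioned above is easy to track per month. The commit log and category labels below are hypothetical, a sketch of the idea rather than how DevMetrics computes it.

```python
# Track the share of feature commits per month.
# The (month, category) log and the labels are illustrative assumptions.
from collections import Counter

commits = [
    ("2024-01", "feature"), ("2024-01", "maintenance"), ("2024-01", "maintenance"),
    ("2024-02", "feature"), ("2024-02", "feature"), ("2024-02", "maintenance"),
]

def feature_ratio(commits, month):
    """Fraction of a month's commits that are feature work."""
    counts = Counter(cat for m, cat in commits if m == month)
    total = sum(counts.values())
    return counts["feature"] / total if total else 0.0

# A ratio rising month over month suggests less defect-driven
# maintenance work as practices improve.
```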
There is a wealth of information hidden in trends. As noted before, comparing contributors to one another is meaningless, but if you look at a single contributor’s trends over time, a different picture starts to emerge.
Is code activity suddenly falling off a cliff? Is complexity growing over time? The answer may or may not lie in the data, but you can spot a problem long before it hits the fan, and ask the right questions straight away.
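One simple way to catch the "falling off a cliff" case is to compare a recent window of a metric against its earlier baseline. This is a minimal sketch under assumed data and thresholds, not DevMetrics' detection logic.

```python
# Flag a sharp drop: recent average well below the earlier baseline.
# Window size and threshold are illustrative assumptions.
from statistics import mean

def trend_alert(series, window=3, drop_threshold=0.5):
    """True when the last `window` points average less than
    drop_threshold times the average of all earlier points."""
    if len(series) <= window:
        return False  # not enough history to compare
    baseline = mean(series[:-window])
    recent = mean(series[-window:])
    return recent < drop_threshold * baseline

# Weekly code-activity figures (illustrative):
activity = [780, 800, 820, 790, 350, 320, 300]
```

Here `trend_alert(activity)` fires because the recent average (about 323) is well under half the earlier baseline (about 798), exactly the kind of shift you want to ask about early.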
Asking the right questions
Oftentimes team members suspect that something is not quite right with a project, but aren’t sure which questions to ask to get to the bottom of it, and the wrong questions usually lead to the wrong answers. An understanding backed by data adds immensely to a team’s ability to ask the right questions and get to the bottom of things much quicker.
Being able to spot problematic trends long before they become real problems, and asking the right questions backed by real data, allows you to address issues before they manifest as buggy software or mountains of technical debt.
Probably the most important part of all of this is to implement real, actionable change to address whatever you find, and to track the effects of those steps. Reevaluate often and iterate until you reach the desired result. It may take many iterations, but the outcome is a net positive for the whole team.
The myth that measuring software productivity is a myth
Development productivity can and should be measured, and done correctly it benefits the entire team. Yes, it is different from and more complex than something like manufacturing, and yes, it is a more involved process in which many factors need to be accounted for, but the results are well worth the effort for both tech and business.
Here are some guidelines to help you along the way:
- Measure team members against themselves, and when you do compare developers, take every aspect into account, not any one metric or set of metrics in isolation.
- Look for trends and changes in trends to spot problems or areas for improvement.
- Have access to the right data and form an understanding of the data to empower you to ask the right questions.
- Don’t look for the stick. Measuring should serve to build trust within a team, not break it down.
We’ll be sharing interesting patterns and metrics we uncover on this blog, so keep an eye out. In the meantime, make use of the free trial to see what insights are waiting to be uncovered in your repo.