
Can You Measure Software Developer Productivity?


The cost of software development kills innovation by limiting resources available to solve problems

THE PRODUCTIVITY DILEMMA

Let’s face it – software development is expensive.  Really expensive.  It’s not hard to understand why – software development is a complicated and still-maturing industry, and as the sector grows, it actually gets more complicated, not less, because of the acceleration of changes in technologies, programming languages, and toolsets.

As a technology consultant, one who is paid to help build expensive, complex systems, I should be happier than a fanboy on a Fortnite bender about this trend, right?  Wrong – it frustrates me a great deal.  My job is to solve problems and build things that people need, and that gets harder when funding becomes a challenge for our clients.

So here’s the question I’ve been grappling with – how can we make software development more productive to reduce costs?

There are lots of things our industry has done over the past few decades to tackle this problem:

  • Developed working methodologies to build repeatable practices – Waterfall, Unified Process, Agile, XP, etc.
  • Adopted design patterns and principles to solve common problems – MVC, SOLID, the GoF patterns, and many others
  • Leveraged lower cost resources through offshoring

None of these have been a panacea.  Look at any enterprise and you’ll find competing SDLC methodologies, loose adherence to design practices, and the familiar efficiency roadblocks that come with offshoring.  While these efforts have helped manage cost, it is very difficult to measure the effect they have really had.

MEASURING PRODUCTIVITY

What to do, then?  More than anything, the focus on productivity has to start with the most human element of all – the individual developer herself.  The focus has to be on increasing the speed at which a developer can turn a designed solution into working code with as few errors as possible.

Anyone who has been in the software industry knows there are broad ranges in developers’ productivity.  It depends on the individual’s grasp of programming theory, their educational background, years of experience, their personal situation at the time, how much Fortnite they play, etc.

Why is this important?  Quite simply, time is money.  The longer it takes a developer to code a solution, the more it costs.  In today’s environment of nearly full employment, demand for software developers has never been higher, which brings a lot of varied talent into the picture to meet that demand.  Anyone who has hired a developer knows the productivity gap I’m talking about – hiring is an expensive proposition, and no matter how much interviewing you do, you’re never sure what sort of productivity you’ll get until that person gets to work.

Why is measuring productivity so hard?  Because a good measurement involves an apples-to-apples comparison between developers, yet they will almost never complete the same task to produce the same set of code.  Since every development task is different, we cannot establish a baseline for how long it SHOULD take to perform a task versus how long it WILL take a specific developer.  Throw in each person’s differing levels of experience, education, and general abilities with the discipline, and…you get the picture.

Does that mean we’re stuck with technical interviews, coding tests, and answered prayers to create a team of highly productive software engineers?  Not quite.  Agile practices give us an opportunity to solve the biggest challenge in measuring developer productivity – creating a baseline to measure the variance between the estimated and actual time to perform a coding task.

HOW IT WORKS

Every ALM tool – Jira, or otherwise – allows a Scrum team to create story sub-tasks during their planning sessions.  Usually, a developer assigned to a sub-task has an opportunity to estimate the time it should take to complete that task, measured in hours.  During the sprint, developers can then track the actual hours spent so the team can evaluate the variance between estimated and actual hours.

This variance isn’t particularly helpful as a productivity metric because the individual developer may be much faster or slower than the average, and their estimations likely reflect this bias.

The solution to this problem is to have all the developers on the Scrum team estimate each sub-task, creating a proxy baseline and a more reasonable expectation of how long the task should take.  Then, once a task is assigned to an individual developer, the variance calculation starts to have some meaning.
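To make the arithmetic concrete, here is a minimal sketch of that calculation in Python.  The data shapes and function names are my own illustration, not any ALM tool’s API:

```python
# Each team member estimates the sub-task in hours; the baseline is the
# average of those estimates, and the assignee's variance is measured
# against that baseline rather than against their own estimate.
from statistics import mean

def baseline_hours(team_estimates):
    """Average the whole team's estimates for one sub-task."""
    return mean(team_estimates)

def variance_hours(actual_hours, team_estimates):
    """Positive = slower than the team's expectation, negative = faster."""
    return actual_hours - baseline_hours(team_estimates)

# Example: four developers estimate a sub-task at 4, 6, 5, and 5 hours,
# giving a baseline of 5.  The assignee logs 7 actual hours.
print(variance_hours(7, [4, 6, 5, 5]))  # 2 hours over the baseline
```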

What meaning are we to glean from this variance? When looking at large sets of variances (hundreds or thousands of tasks over multiple projects), we can observe patterns in individual developers’ productivity.  If they consistently take longer to complete a task than the established baseline, we can look more deeply at the data to find root causes and potential remediations.  Is there a skills mismatch, allocation mismatch, or something else?  Does the developer need more pair programming or training in specific areas?
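Here is a rough sketch of that kind of trend analysis, assuming each completed sub-task has been reduced to a (developer, variance) pair.  The grouping logic and the threshold are illustrative assumptions, not a standard:

```python
# Flag developers whose average variance across many tasks consistently
# exceeds a chosen threshold; they are candidates for a deeper look at
# skills fit, allocation, pairing, or training.
from collections import defaultdict
from statistics import mean

def variance_trends(records, threshold=1.0):
    """records: iterable of (developer, variance_hours) pairs."""
    by_dev = defaultdict(list)
    for dev, variance in records:
        by_dev[dev].append(variance)
    return {dev: mean(vs) for dev, vs in by_dev.items() if mean(vs) > threshold}
```

With hundreds or thousands of records, a developer averaging a couple of hours over the baseline per task stands out clearly, while one-off misses wash out.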

If a developer consistently performs tasks in less time than the estimations, we have hard metrics to reward that individual and encourage continued productivity.  We can also look at the data to see how we might have other developers emulate good behaviors from these high performers.

IMPLICATIONS

I know, I know – I can hear the complaints now.  A small group of 2-4 developers on a Scrum team estimating a task cannot be used as a valid baseline, you say.  It’s a fair point, but any leftover estimation bias from a small sample size of developers would be offset by the volume of variance data we would collect.  As a manager, I care more about the variance trends and less about the exactness of any one variance calculation.

But wait, you say.  All of this supposes a developer will be truthful in reporting their actual duration on a task.  People lie to themselves and others all the time (just read “Everybody Lies” by Seth Stephens-Davidowitz) – if a developer knows they’ll be measured on variance, they’ll manipulate their actuals to improve their perceived productivity.

Again, fair point, but there is a self-policing solution to this problem.  An employee is generally expected to work 8 hours a day.  If a developer consistently under-reports their actual durations on a task, it would appear they were consistently working less than they should be.

Say a developer is assigned two 4-hour tasks, and he takes 1 day to complete both but only reports 2 hours of actual duration for each task.  We would see a report that shows him only working 4 hours that day.  With enough data points, we could easily spot a trend of under-reporting and take corrective action.
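The check itself is simple to automate.  A sketch of the idea, with the data shapes and tolerance as my own assumptions:

```python
# Sum each developer's reported actuals per working day and flag days that
# fall well short of a full working day of logged effort.
from collections import defaultdict

EXPECTED_HOURS_PER_DAY = 8

def underreported_days(logs, tolerance=2):
    """logs: iterable of (developer, date, reported_hours) entries."""
    totals = defaultdict(float)
    for dev, day, hours in logs:
        totals[(dev, day)] += hours
    return [(dev, day, total) for (dev, day), total in totals.items()
            if total < EXPECTED_HOURS_PER_DAY - tolerance]

# The developer in the example above, reporting 2 hours on each of two tasks,
# shows up with only 4 hours logged for the day:
print(underreported_days([("dev", "2024-03-01", 2), ("dev", "2024-03-01", 2)]))
```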

CONCLUSION

Why is all of this important?  As individuals, not just employees, we should all strive to improve ourselves every day.  That’s how society is supposed to work – we do things, we make mistakes, we learn from them and we grow in the process.  But we can’t improve what we can’t measure.  The method I describe is very easy to implement, as long as your team is following the Scrum ceremonies.  With simple metrics and trend analysis, maybe we can finally solve a difficult problem and leave ourselves more time to knock a few more things off that ever-growing to-do list.
