Tag Archives: Continuous Delivery

Continuous Delivery Metrics: Do we need anything other than Cycle Time?

Without doubt, cycle time is the most useful metric for measuring Continuous Delivery. In chapter 5 of “Continuous Delivery”, Dave Farley and Jez Humble define it as “the time between deciding that a feature needs to be implemented and having that feature released to users”. They also note that while this shouldn’t be the only metric you use, the others they mention (number of builds, code coverage, cyclomatic complexity etc.) are more concerned with the initial Continuous Integration phase of the CD pipeline than with the pipeline as a whole. Cycle time really is the best indication of the health of your Continuous Delivery pipeline.

On the project I’m currently working on we are rolling out a Continuous Delivery pipeline, and interestingly it has raised some issues with simplistically using cycle time as the main metric. The underlying assumption with cycle time is that any restrictions or bottlenecks can be solved by working on them (not much of a surprise!). But what happens when your bottlenecks are external and can’t be solved? A classic example is an external regulator enforcing a legal requirement that code deployed in its jurisdiction is subject to their analysis. There is no point trying to “subordinate all the other processes to the constraint” when the constraint is not solvable. It’s not unusual to see this sort of analysis take days: your CD pipeline could be humming along nicely, but your deployments into production slam into a requirement that stops you from deploying, which certainly puts a crimp in any idea that you can release multiple times a day.

The external restriction can skew cycle time enough to hide other bottlenecks, the ones that we could and should be working on. One option would be to measure cycle time from deciding to implement the feature to when the code reaches the environment before production (staging/pre-production/next-live). The trouble is that if you do this you completely lose the connection to the customer, which defeats the point.

One improvement to this was to record not just the total cycle time but also the number of deployments into each environment. This gave us an efficiency metric that allowed us to pinpoint where the issues were and record how our work affected them. If we imagine a simple CD pipeline of 5 environments, Development, QA, Performance, UAT and Production, which we deploy to in a serial manner, a really efficient pipeline would have values like these:

In this hypothetical example we deployed 100 times this week (for information’s sake, the deployment rate at my current employer is about 2.5x higher). For every 100 deployments to dev, we see about 95 to QA, 95 to the performance environment, 90 go on to UAT, and of those 85 make it to prod. Of course this is highly idealised, but you get the picture. You are always going to do the most deployments to the development environment and the fewest to the production environment. What is important is the gradient between the values. Expressing them as a percentage of dev deployments, you get an efficiency ratio of 95/95/90/85. The difference, or gradient, between two environments tells you how efficient you are being at that step.
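The efficiency ratio is simple to compute. A minimal sketch, using the idealised weekly counts above (the environment names and numbers are just the hypothetical example, not real data):

```python
# Weekly deployment counts per environment, from the idealised example above.
deployments = {
    "Development": 100,
    "QA": 95,
    "Performance": 95,
    "UAT": 90,
    "Production": 85,
}

# Express each environment's count as a percentage of dev deployments.
dev_count = deployments["Development"]
ratios = {env: round(100 * count / dev_count) for env, count in deployments.items()}

print(ratios)
# {'Development': 100, 'QA': 95, 'Performance': 95, 'UAT': 90, 'Production': 85}
```

Dropping the dev figure (always 100 by definition) gives the 95/95/90/85 ratio quoted above.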

So what does it look like in the real world when you have an external blocker?

The values for this are 65/60/50/05. Only 1 in 20 builds makes it into production, which isn’t great. The biggest bottleneck is the external restriction, but there is also a huge drop from development to QA. It turns out that some of the tests use non-deterministic data and occasionally fail. Of course this is a huge no-no, but it was difficult to see just how much it was costing us while the greatest delay was from UAT to Production.
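Finding the steepest gradient can be automated the same way. A sketch, again using the 65/60/50/05 figures from the blocked pipeline above:

```python
# Efficiency ratios (% of dev deployments) for the blocked pipeline above.
envs = ["Development", "QA", "Performance", "UAT", "Production"]
ratios = [100, 65, 60, 50, 5]

# The drop between each pair of consecutive environments; the steepest
# drop marks the biggest bottleneck in the pipeline.
drops = {
    f"{envs[i]} -> {envs[i + 1]}": ratios[i] - ratios[i + 1]
    for i in range(len(envs) - 1)
}

print(drops)
# {'Development -> QA': 35, 'QA -> Performance': 5,
#  'Performance -> UAT': 10, 'UAT -> Production': 45}

print(max(drops, key=drops.get))
# UAT -> Production
```

The 45-point drop into production is the external restriction, but the 35-point drop into QA (the flaky tests) is clearly visible alongside it, which is exactly what raw cycle time was hiding.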

Continuous Delivery recommends that you identify the limiting constraint on your system, and really that is no more than what this does. Cycle time is hugely important in knowing the state of your CD pipeline. The truth is that recording the number of deployments gives greater depth to cycle time and allows you to see how your whole process could be optimised.