Monthly Archives: October 2011

Continuous Delivery Metrics: Do we need anything other than Cycle Time?

Without doubt, cycle time is the most useful metric for measuring Continuous Delivery. In chapter 5 of “Continuous Delivery”, Dave Farley and Jez Humble define it as “the time between deciding that a feature needs to be implemented and having that feature released to users”. They also mention that while this shouldn’t be the only metric you use, the others they suggest (number of builds, code coverage, cyclomatic complexity and so on) are more concerned with the initial Continuous Integration phase of the CD pipeline than with the pipeline as a whole. Cycle time really is the best indication of the health of your Continuous Delivery pipeline.

In the project I’m working on we are currently rolling out a Continuous Delivery pipeline, and interestingly it has raised some issues with simplistically using cycle time as the main metric. The underlying assumption with cycle time is that any restrictions or bottlenecks can be solved by working on them (not much of a surprise!). But what happens when your bottlenecks are external and can’t be solved? A classic example is when an external regulator enforces a legal requirement that code deployed in its jurisdiction is subject to their analysis. There is no point trying to “subordinate all the other processes to the constraint” when the constraint is not solvable. It’s not unusual to see this sort of analysis take days, so your CD pipeline can be humming along nicely only for your deployments to slam into a requirement that stops you from releasing, which certainly puts a crimp in any idea that you can release multiple times a day.

The external restriction can skew cycle time enough to hide other bottlenecks, the ones that we could and should be working on. One option would be to measure cycle time from deciding to implement the feature to when the code reaches the environment before production (staging/pre-production/next-live). The trouble is that if you do this you completely lose the connection to the customer, which defeats the point.

One improvement was to record not just the total cycle time but also the number of deployments into each environment. This gave us an efficiency metric that allowed us to pinpoint where the issues were and record how our work affected them. If we imagine a simple CD pipeline of five environments (Development, QA, Performance, UAT and Production) which we deploy to serially, a really efficient pipeline would have values like these:

In this hypothetical example we deployed 100 times this week (for information’s sake, the deployment rate at my current employer is about 2.5x higher). For every 100 deployments to dev, we see about 95 to QA, 95 to the performance environment, 90 go on to UAT and of those 85 make it to production. Of course this is highly idealised but you get the picture. You will always do the most deployments to the development environment and the fewest to production. What is important is the gradient between the values. Expressing them as a percentage of dev deployments you get an efficiency ratio of 95/95/90/85. The difference, or gradient, between two environments tells you how efficient you are being at that step.
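To make the arithmetic concrete, here is a quick sketch (in Python, using the hypothetical weekly counts from the example above) of how the efficiency ratio and the per-step gradients fall out of the raw deployment counts:

```python
# Weekly deployment counts per environment (hypothetical numbers from the post).
deployments = {
    "Development": 100,
    "QA": 95,
    "Performance": 95,
    "UAT": 90,
    "Production": 85,
}

dev_count = deployments["Development"]

# Each environment expressed as a percentage of dev deployments.
ratios = {env: round(100 * n / dev_count) for env, n in deployments.items()}

# The gradient between consecutive environments shows where builds are lost.
envs = list(deployments)
gradients = {f"{a} -> {b}": ratios[a] - ratios[b] for a, b in zip(envs, envs[1:])}

print(ratios)     # QA/Performance/UAT/Production come out as 95/95/90/85
print(gradients)  # small, even drops at every step: a healthy pipeline
```

The flat, shallow gradient (a loss of at most 5 points at any one step) is the signature of an efficient pipeline.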

So what does it look like in the real world when you have an external blocker?

The values for this are 65/60/50/05. Only 1 in 20 builds makes it into production, which isn’t great. The biggest bottleneck is the external restriction, but there is also a huge drop from development to QA. It turns out that some of the tests use non-deterministic data and occasionally fail. Of course this is a huge no-no, but it was difficult to see just how much it was costing us while the greatest delay was from UAT to Production.
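Running the same gradient calculation over these real-world percentages makes both problem steps jump out, rather than just the headline one. Again this is only a Python sketch using the ratios quoted above:

```python
# Efficiency ratios as a percentage of dev deployments (from the post).
ratios = {"Development": 100, "QA": 65, "Performance": 60, "UAT": 50, "Production": 5}

envs = list(ratios)
drops = {f"{a} -> {b}": ratios[a] - ratios[b] for a, b in zip(envs, envs[1:])}

print(drops)
# Two big losses stand out: Development -> QA (35) as well as the
# expected UAT -> Production (45) caused by the external restriction.

print(f"Builds reaching production: 1 in {ratios['Development'] // ratios['Production']}")
```

The external constraint dominates the totals, but the 35-point drop into QA is the one we could actually fix.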

Continuous Delivery recommends that you identify the limiting constraint on your system, and really this is no more than a way of doing that. Cycle time is hugely important in knowing the state of your CD pipeline, but recording the number of deployments gives greater depth to cycle time and allows you to see how your whole process could be optimised.

Scala, Groovy, Clojure, Jython, JRuby and Java: Jobs by Language

In a previous post I pointed out that one of the more obvious recent changes in the Java landscape has been the meteoric rise in popularity of other languages for the JVM. Some are old and some are new: JVM-compatible versions of established languages such as JRuby and Jython, Java-esque languages like Groovy and Scala, and brand new languages like Clojure and Kotlin all offer genuine options for those who appreciate the performance and reliability of the JVM but want a different syntax.

In an ideal world all developers would be able to develop in the language of their choice. The reality is that as developers we are constrained by the suitability of the language, the tooling support and what languages companies are actually using. Firstly, you choose the language appropriate to the domain: one that lets you do your job quickly and easily but with the appropriate level of support for your non-functional requirements, like performance. Secondly, no one wants to be slogging through the coding process in a simple editor. Yes, I know that we could all use vim or emacs, but being able to refactor large swathes of code easily and quickly (hello TDD!) kind of demands a modern IDE like IntelliJ or Eclipse. Thirdly, very few of us are in a position to dictate to our employers what language we should be using. Learning a language with rising popularity means you have a greater chance of being employed in the future (which is nice), but it is employers who drive the acceptance of new languages.

The fact is that many companies boast about using the latest and greatest languages since it makes them more attractive to candidates. You can barely move for blog posts and tweets from people raving about how their company has completely changed its development process with a new language, but what is the real picture?

For a useful indication of industry acceptance we can look at the job trends on indeed.com. The granddaddy of language charts is Tiobe, but it’s no use here since a) it does not provide sufficient information and b) it is too easily gamed (yes, Delphi dudes, we know what you did). Now before you complain: I know that using something like this is far from perfect and a long way from scientific, but unless you fancy doing a longitudinal study, asking all the companies what they are using and believing their answers are real rather than marketing fluff, it’s probably good enough to be illustrative.

So what can this tell us about how the industry sees the major languages of the JVM: Java, Groovy, Scala, Clojure, Jython and JRuby*? What happens when we look at the percentage of all jobs that mention these languages?

Umm, well… it’s pretty obvious that despite all the industry noise about other languages, Java is still massively dominant in the job marketplace, with almost 3.5% of jobs requiring Java knowledge. We all know that Java is an industry heavyweight, but it is a bit of a surprise that in comparison the other languages form an indistinguishable line. Welded close to the zero line, they would need some seriously exponential growth to start to threaten Java.

So what happens when you remove Java…

This is a lot more interesting. Jython was the first language other than Java to be really accepted on the JVM. Groovy started to pick up in 2007 and quickly became the foremost of the alternative languages, no doubt driven by Grails. Clojure and JRuby have never really garnered much support, despite a rise in the last 18 months or so. I think the most interesting point is the recent increase in the acceptance of Scala. Currently third behind Groovy and Jython, the gradient indicates that it will soon move into second. Comparing the growth rates of Scala and Groovy on a relative basis, we see the following.

So we can see that Scala has finally crossed over Groovy’s growth rate. It’s completely reasonable to say that this could be temporary and that we should not read too much into it, but there are a few data points there, so it does not appear to be a flash in the pan.
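For the curious, comparing growth “on a relative basis” just means looking at period-on-period change as a fraction of the previous value, which neutralises Groovy’s much larger absolute job count. The quarterly numbers below are invented purely to illustrate the calculation; the real series comes from the indeed.com charts:

```python
# Hypothetical quarterly job counts, chosen only to show the shape of the
# comparison: Groovy larger in absolute terms, Scala growing faster.
scala = [10, 14, 20, 30]
groovy = [40, 48, 56, 64]

def relative_growth(series):
    # Period-on-period growth, each step as a fraction of the previous value.
    return [(b - a) / a for a, b in zip(series, series[1:])]

print(relative_growth(scala))   # consistently larger fractional steps
print(relative_growth(groovy))  # bigger absolute numbers, smaller fractions
```

On these (made-up) figures Scala’s relative growth exceeds Groovy’s in every period even though Groovy adds more jobs in absolute terms, which is exactly the kind of crossover the chart shows.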

So what can we say? While you’ll want to dust off the old Groovy textbooks and maybe have a look at some Scala tutorials, the best thing you can do is to keep your Java-fu in top-notch order. As far as the industry is concerned, Java is still the Daddy of the JVM languages and looks like staying that way for some time.

* – I did originally include Kotlin and Gosu but since there were 0 jobs for Kotlin and only about 9 for Gosu they would only have been noise.