Category Archives: Testing

Checking Logback based Logging in Unit Tests

I wrote a simple post a few years ago on unit testing your logging. Checking my logs I’ve seen that it’s always been a popular post as it appears to be something that lots of people want to do.

My previous version was based on log4j and since many people have moved on to logback I thought I would update it. So here it is 🙂
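The approach is the same as the log4j version: swap in an appender you can inspect. Logback ships with a ListAppender that records every event in a public list, so you don't even need to write your own. Here's a minimal sketch of the idea (the logger name and message are just for illustration; in a real test the println lines would be JUnit assertions):

```java
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.read.ListAppender;
import org.slf4j.LoggerFactory;

public class LogbackCaptureExample {

    public static void main(String[] args) {
        // Cast the slf4j logger to the underlying logback Logger so
        // that we can attach an appender to it programmatically
        Logger logger = (Logger) LoggerFactory.getLogger("com.bloodredsun.Example");

        // ListAppender simply records every ILoggingEvent in a public list
        ListAppender<ILoggingEvent> appender = new ListAppender<ILoggingEvent>();
        appender.start();
        logger.addAppender(appender);

        // Normally the class under test would do the logging
        logger.warn("Something bad happened");

        // Now we can inspect what was logged
        ILoggingEvent event = appender.list.get(0);
        System.out.println(event.getLevel() + ": " + event.getFormattedMessage());
    }
}
```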

And that’s it.

Why QA needs to change

It’s unarguable that Continuous Delivery has gone from being just a CTO-friendly buzzword to a central requirement for a high-performance delivery team. It’s no longer cutting edge to merely check your code in to source control and have Jenkins or another continuous integration box run the unit tests. You have to be able to get that code out into a live environment as fast as possible, and that means Continuous Delivery. The ability to deliver code into production at will has a direct effect on your bottom line, but to do this effectively you need two things: 1) to understand the important areas of functionality that the customers really use, and 2) to be able to test these areas as quickly and easily as possible. The first is a business issue but the second boils down to automating your testing.

The trouble is that most of the industry holds on to a quality assurance process that is directly at odds with this. The reasons are mostly historical, but companies have had varying levels of success in the drive to automate QA, and the level varies with how highly the company values this ability. So what levels of QA do we commonly see?


Quick Tip – Tomcat silently fails when a Filter fails to start

I was throwing together a prototype to demonstrate some caching strategies but for some reason Tomcat was failing to start up cleanly. Annoyingly, the only error message was underwhelming.

SEVERE: Error filterStart
Jul 6, 2012 3:39:05 PM org.apache.catalina.core.StandardContext startInternal
SEVERE: Context [] startup failed due to previous errors

There was obviously some bad config or setup but let’s be honest, that’s a rubbish default message. The solution was to create a file called logging.properties and put it on the classpath: either in WEB-INF/classes or anywhere else listed directly on the classpath (Maven users can put it in src/main/resources). This file should contain the following config, which will output the stacktrace of the offending error.

org.apache.catalina.core.ContainerBase.[Catalina].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].handlers = java.util.logging.ConsoleHandler

This produces the more informative stacktrace below.

Jul 6, 2012 3:34:33 PM org.apache.catalina.core.StandardContext filterStart
SEVERE: Exception starting filter cacheFilter
java.lang.ClassNotFoundException: com.betfair.web.filters.CachingFilter
...
SEVERE: Error filterStart

In my case it was a simple misconfiguration of the project where I hadn’t included the filter class, d’oh!
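For context, the failure came from a filter declared in web.xml whose class wasn’t on the classpath; the declaration would have looked something like this (reconstructed from the stacktrace above):

```xml
<filter>
    <filter-name>cacheFilter</filter-name>
    <filter-class>com.betfair.web.filters.CachingFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>cacheFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```

If Tomcat cannot load the filter-class, the whole context fails to start, and without the logging config above all you get is "Error filterStart".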

Continuous Delivery Metrics: Do we need anything other than Cycle Time?

Without doubt, cycle time is the most useful metric for measuring Continuous Delivery. In chapter 5 of “Continuous Delivery”, Dave Farley and Jez Humble define it as “the time between deciding that a feature needs to be implemented and having that feature released to users”. They also mention that while this shouldn’t be the only metric you use, the others they mention (number of builds, code coverage, cyclomatic complexity, etc.) are more concerned with the initial Continuous Integration phase of the CD pipeline rather than the pipeline as a whole. Cycle time really is the best indication of the health of your Continuous Delivery pipeline.

Now, in the project I’m working on we are currently rolling out a Continuous Delivery pipeline, and interestingly it has raised some issues with simplistically using cycle time as the main metric. The underlying assumption with cycle time is that any restrictions or bottlenecks can be solved by working on them (not much of a surprise!). But what happens when your bottlenecks are external and can’t be solved? A classic example would be when an external regulator enforces a legal requirement that code deployed in its jurisdiction is subject to their analysis. There is no point changing to “subordinate all the other processes to the constraint” when the constraint is not solvable. Since it’s not unusual to see this sort of analysis take days, your CD pipeline could be humming along nicely only for your deployments into production to slam into a requirement that stops you from deploying, which certainly puts a crimp in any idea that you can release multiple times a day.

The external restriction can skew cycle time enough to hide other bottlenecks, the ones that we could and should be working on. One option would be to measure cycle time from deciding to implement the feature to the code reaching the last environment before production (staging/pre-production/next-live). The trouble is that if you do this you completely lose the connection to the customer, which defeats the point.

One improvement to this was to record not just the total cycle time but also the number of deployments into each environment. This gave us an efficiency metric that allowed us to pinpoint where the issues were and record how our work affected them. If we imagine a simple CD pipeline of 5 environments (Development, QA, Performance, UAT and Production) which we deploy to serially, a really efficient pipeline would have values like these:

Development: 100
QA:           95
Performance:  95
UAT:          90
Production:   85

In this hypothetical example we deployed 100 times this week (for information’s sake, the deployment rate at my current employer is about 2.5x higher). For every 100 deployments to dev, we see about 95 to QA, 95 to the performance environment, 90 go on to UAT and of those 85 make it to prod. Of course this is highly idealised but you get the picture. You will always make the most deployments to the development environment and the fewest to production; what is important is the gradient between the values. Expressing them as a percentage of dev deployments gives an efficiency ratio of 95/95/90/85, and the difference, or gradient, between two environments tells you how efficient you are being at that step.
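To make the arithmetic concrete, here is a sketch of how the efficiency ratio might be calculated from raw deployment counts. The class and method names are mine, not from any real tool:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PipelineEfficiency {

    /**
     * Expresses each environment's deployment count as a percentage of the
     * first (development) environment's count. The map must be in pipeline
     * order, hence the LinkedHashMap.
     */
    public static Map<String, Integer> ratios(Map<String, Integer> deployments) {
        int devCount = deployments.values().iterator().next();
        Map<String, Integer> result = new LinkedHashMap<String, Integer>();
        for (Map.Entry<String, Integer> entry : deployments.entrySet()) {
            result.put(entry.getKey(), entry.getValue() * 100 / devCount);
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Integer> deployments = new LinkedHashMap<String, Integer>();
        deployments.put("Development", 100);
        deployments.put("QA", 95);
        deployments.put("Performance", 95);
        deployments.put("UAT", 90);
        deployments.put("Production", 85);

        // Prints the 95/95/90/85 efficiency ratio from the example above
        System.out.println(ratios(deployments));
    }
}
```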

So what does it look like in the real world when you have an external blocker?

Development: 100
QA:           65
Performance:  60
UAT:          50
Production:    5

The values for this are 65/60/50/05. Only 1 in 20 of the builds makes it into production, which isn’t great, and the biggest bottleneck is the external restriction, but there is also a huge drop from development to QA. It turned out that some of the tests used non-deterministic data and would occasionally fail. Of course this is a huge no-no, but it was difficult to see just how much it was costing us since the greatest delay was from UAT to Production.

Continuous Delivery recommends that you identify the limiting constraint on your system, and really this is no more than a way of doing that. Cycle time is hugely important in knowing the state of your CD pipeline, but recording the number of deployments gives greater depth to cycle time and allows you to see how your whole process could be optimised.

In Defence of the Builder Pattern

There was an interesting discussion on reddit about this blog post by Robey Pointer. First off, and before I start my defence of the Builder pattern, I should say that it’s a really good article and that everyone should read it. The author states that patterns are boilerplate in your code since the good ones are absorbed into languages as they develop. In an ideal world that might be true, but given the glacial pace of language evolution (closures in Java, anyone?) it’s pretty unlikely to happen even in a post-Java world. Whether or not they are absorbed into the language does not detract from the fact that they represent smart solutions to common problems.

The article essentially slates the Factory and Builder patterns, two of the most common creational patterns in Java. To be honest, I have to agree with the Factory criticism but I completely disagree with the comments made about the Builder pattern. Robey makes some great points about how they could be replaced by configuration classes (I think that default parameters would solve most of these issues). What I think is missed is the usefulness of the Builder pattern for creating complex objects (the GoF book explicitly states that Builders deal with the “construction of a complex object”), especially when you need to do this repeatedly, such as when writing unit tests. In fact I think of this as a natural extension of the Null Object pattern that has grown up and become a Default Object pattern.

As I mentioned above, I think complexity is the aspect of the code that makes the case for using a Builder over anything else. I’ve seen some devs use Builders to create immutable objects, but I don’t think that is a great usage of the pattern. Given that this pattern is really only suited to the creation of complex objects, a useful example needs to be complex enough to prove the point but simple enough to follow easily, so apologies if I have chosen something that you think is too simple/complex.

Let’s say we have a Person class. This class is fairly complex in that it contains nested objects (addresses, cars, pets) as well as simple fields (firstName, lastName).

package com.bloodredsun;

import java.util.List;

public class Person {

    private String firstName;
    private String lastName;
    private List<Address> addresses;
    private List<Car> cars;
    private List<Pet> pets;

    public Person(String firstName, String lastName,
                          List<Address> addresses,
                          List<Car> cars, List<Pet> pets) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.addresses = addresses;
        this.cars = cars;
        this.pets = pets;
    }

    public String getFirstName() {
        return firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public List<Address> getAddresses() {
        return addresses;
    }

    public List<Car> getCars() {
        return cars;
    }

    public List<Pet> getPets() {
        return pets;
    }
}

Of course, these nested objects such as Address and Car could have further levels of nesting, containing further objects.

Part of the hypothetical application that uses the Person object involves processing them, and since we are good little test-infected developers, we want tests that tell us what happens and that we pass our acceptance criteria. Creating the objects via constructors, or worse yet setters, would give us setup code that dwarfed the actual test code. Every time we wanted to create another test, we would have to go through the same rigmarole of setting up the objects. We could create static methods to create the objects but hey, that’s a Factory pattern! Code reuse would be a pain too, since the slightest difference would require another method. Not good.

What we need is a method of creating the object that allows it to have default values while still allowing us to easily override them, and preferably with a fluid interface. Step forward the Builder pattern.

package com.bloodredsun;

import java.util.Collections;
import java.util.List;

public class PersonBuilder {

    private String firstName;
    private String lastName;
    private List<Address> addresses;
    private List<Car> cars;
    private List<Pet> pets;

    public PersonBuilder() {
        this.firstName = "Bob";
        this.lastName = "Smith";
        this.addresses = Collections.emptyList();
        this.cars = Collections.emptyList();
        this.pets = Collections.emptyList();
    }

    public PersonBuilder withFirstName(String firstName) {
        this.firstName = firstName;
        return this;
    }

    public PersonBuilder withLastName(String lastName) {
        this.lastName = lastName;
        return this;
    }

    public PersonBuilder withAddresses(List<Address> addresses) {
        this.addresses = addresses;
        return this;
    }

    public PersonBuilder withCars(List<Car> cars) {
        this.cars = cars;
        return this;
    }

    public PersonBuilder withPets(List<Pet> pets) {
        this.pets = pets;
        return this;
    }

    public Person build(){
        return new Person(firstName, lastName, addresses, cars, pets);
    }
}

This means that we can write tests quickly and easily, setting up the objects with very little code. Returning the Builder allows method chaining, which makes for a nicer syntax, and while I know it’s personal preference, I like to use the prefix ‘with-’ for my setters to indicate that they are something more than setters. In the tests below I also use further Builders (CarBuilder and PetBuilder) for the nested objects. It’s not something you have to do for simple objects (in fact I discourage it) but I wanted to show that complex objects can benefit from being constructed from nested Builders.
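For completeness, the CarBuilder used in the tests below follows exactly the same shape as PersonBuilder. The Car fields here are purely illustrative; the real class could hold anything:

```java
// A minimal Car to build against - the fields are purely illustrative
class Car {
    private final String make;
    private final String model;

    Car(String make, String model) {
        this.make = make;
        this.model = model;
    }

    public String getMake() { return make; }
    public String getModel() { return model; }
}

public class CarBuilder {

    // Sensible defaults mean tests only override what they care about
    private String make = "Ford";
    private String model = "Focus";

    public CarBuilder withMake(String make) {
        this.make = make;
        return this;
    }

    public CarBuilder withModel(String model) {
        this.model = model;
        return this;
    }

    public Car build() {
        return new Car(make, model);
    }
}
```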

package com.bloodredsun;

import org.junit.Before;
import org.junit.Test;

import java.util.Arrays;
import java.util.List;

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class PersonProcessorImplTest {

    PersonProcessorImpl processor;

    @Before
    public void setup(){
        processor = new PersonProcessorImpl();
    }

    @Test
    public void shouldReturnFalseForJones(){
        Person person = new PersonBuilder().withLastName("Jones")
                                           .build();
        assertFalse(processor.process(person));
    }

    @Test
    public void shouldReturnTrueForOneCar(){
        List<Car> cars = Arrays.asList(new CarBuilder().build());
        Person person = new PersonBuilder().withCars(cars).build();
        assertTrue(processor.process(person));
    }

    @Test
    public void shouldReturnFalseForOneCarAndMoreThanOnePet(){
        List<Car> cars = Arrays.asList(new CarBuilder().build());
        List<Pet> pets = Arrays.asList(new PetBuilder().build(),
                                       new PetBuilder().build());
        Person person = new PersonBuilder().withCars(cars)
                                           .withPets(pets)
                                           .build();
        assertFalse(processor.process(person));
    }
}

Now you could certainly do a lot of this setup with a good mocking library like Mockito but you would still find yourself wasting lines and lines of code setting up the object. Using the Builder pattern in this way not only makes your test code far shorter and more readable but it also makes it far quicker to write.

For a real-world example of a nested structure, at my current work we consume a RESTful service that returns an object about 8 levels deep. This is not unnecessary complexity but an accurate reflection of the domain object. I cannot imagine trying to write tests for something as simple as a filter or a bean mapper on an object that complex without using a Builder.

One reason developers love to hate on Design Patterns is that to many they represent “Cargo Cult Code”: code written in the name of patterns by people who don’t quite understand what they are doing, which just lowers the signal-to-noise ratio. But the fact that crappy code is written in the name of design patterns does not change the fact that they are still what Gamma et al described them to be: “simple and elegant solutions to specific problems in object-oriented software design”.

Test Code as a First Class Citizen of your Application

It’s not exactly rocket science or even a new idea, but how many of us treat our test code like our application code? I got to thinking about this when Stuart extended our checkstyle checks to include our unit and integration test code. That simple analysis threw up over 300 violations. The vast majority were ones that we ended up suppressing, like magic numbers and visibility modifiers, but there were plenty of minor annoyances like various import and whitespace violations. Checkstyle is a great starting point, but ideally we would want our test code to undergo the same sorts of static code analysis as our normal code: cyclomatic complexity, LCOM4, duplication, package tangle index and whatever else, beyond code coverage, your tool of choice provides.
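If you’re on Maven, the maven-checkstyle-plugin can be pointed at test sources too. A sketch, assuming the includeTestSourceDirectory flag (check that your plugin version supports it):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-checkstyle-plugin</artifactId>
    <configuration>
        <!-- Run the same checks over src/test/java as well as src/main/java -->
        <includeTestSourceDirectory>true</includeTestSourceDirectory>
    </configuration>
</plugin>
```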

Apart from being one of the ‘ooh shiny!’ aspects of Agile (and we all know that Agile is the one true way), what it does is ensure that the quality of your test code matches the rest of your code. This enforces good coding practices, minimises the amount of cruft that can get into your test code, and makes refactoring a much easier task when it comes around.

Personally I’m a huge fan of Sonar, even if you have to perform a little bit of sleight of hand in your Maven pom file if you have Cobertura installed so that it breaks the build at the test phase (fast feedback please!), but it really does do everything you need. Once you get your test code hooked up too, you’ll get the same benefits that your application code gets and you’ll thank yourself in the long run.