My current client is in the middle of a massive change to the way they perform testing, to ease the transition to Continuous Delivery (if you’ve not come across Continuous Delivery before, I strongly suggest the excellent book of that name by Dave Farley and Jez Humble). One of the things this has highlighted is that, in the domain of testing, certain terms mean different things to different people. To one team, ‘integration tests’ means a full test of all components in a system; to another, it means a simple test between two layers of an application, further complicated by whether the data is provided by a live or a stub implementation.
This has hugely complicated the job of the build teams, the people responsible for creating the build farms that the Continuous Integration servers run on. They want to run as much as possible in the commit build, but by necessity the build machine needs to be isolated so it cannot access a shared environment. The reason is that, since builds are effectively run at random, the commit build must be completely deterministic: no test that alters shared components can be allowed, because two concurrently running builds could access the same shared resource and affect each other’s expected responses. Of course, ideally you would want some sort of elastic cloud where you could provision an entire environment, but given the complexity of the current SOA ecosystem, that is not an option for the near future.
The build team have been asking various teams what tests can be run at commit and have been getting wildly different answers. Apart from the obvious performance and penetration testing, how do you define unit tests, integration tests, acceptance tests, functional tests, component tests and end-to-end tests?
When it was my team’s turn, after confusing the hell out of ourselves trying to understand what the build team were talking about, we sat down with them and thrashed out a categorisation of the different types of testing. Once we had a common definition and scope for each kind of test, we could actually give a useful answer to what could be run at commit.
Unit Tests
Tests of individual classes where any dependencies are mocked or stubbed, written within a test framework like xUnit or TestNG.
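The idea is language-agnostic, so here is a minimal sketch in Python using the standard library’s unittest.mock (the classes and prices are invented for illustration): the class under test depends only on an interface, and the test swaps in a mock so nothing external is touched.

```python
from unittest import mock


class PriceService:
    """Hypothetical collaborator that would normally call a remote API."""

    def latest_price(self, symbol):
        raise NotImplementedError("real implementation hits the network")


class Portfolio:
    """Class under test: depends on PriceService, never on the network."""

    def __init__(self, price_service):
        self.price_service = price_service

    def value(self, holdings):
        return sum(qty * self.price_service.latest_price(sym)
                   for sym, qty in holdings.items())


# A commit-stage unit test: the dependency is mocked, so the test is
# fast, isolated and deterministic.
prices = mock.Mock(spec=PriceService)
prices.latest_price.side_effect = {"A": 10.0, "B": 2.5}.__getitem__
portfolio = Portfolio(prices)
assert portfolio.value({"A": 2, "B": 4}) == 30.0
prices.latest_price.assert_any_call("A")
```

Because nothing here can be affected by another concurrently running build, tests like this are safe to run on an isolated commit-build machine.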
Light Integration Tests
Tests that cross application boundaries, such as code to database or code to a REST or web service, but where the dependency runs on the same machine, for example using an in-memory DB like Hypersonic or an embedded web server that returns stubbed XML or JSON. The entire application is not run, so in a JEE app, for example, we might not test filters and listeners.
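As a sketch of the in-memory-database flavour, here is the same idea using Python’s sqlite3 in `:memory:` mode as a stand-in for Hypersonic (the `users` table and functions are invented for illustration). Real SQL crosses the code-to-DB boundary, but everything lives on the build machine:

```python
import sqlite3


def save_user(conn, name):
    # Data-access code under test: real SQL, real driver.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()


def find_user(conn, name):
    row = conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchone()
    return row[0] if row else None


# Light integration test: the database is real enough to catch broken
# SQL, but in-memory, so the test is isolated and deterministic.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
save_user(conn, "alice")
assert find_user(conn, "alice") == "alice"
assert find_user(conn, "bob") is None
```

Each test gets a fresh database, so two concurrently running builds can never interfere with each other, which is exactly the property the commit build needs.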
Heavy Integration Tests
As above but where the dependencies are real implementations.
Acceptance Tests/Functional Tests
Tests that run the entire system, but where the underlying dependencies may be stub or fake implementations.
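The stubbing idea scales up to this level: the system under test runs for real, but a downstream service is replaced by an embedded stub that returns canned responses. A minimal sketch using Python’s stdlib http.server (the fulfilment service and `order_status` function are invented for illustration):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class StubFulfilmentHandler(BaseHTTPRequestHandler):
    """Stub for a downstream REST dependency: always returns a canned payload."""

    def do_GET(self):
        body = json.dumps({"status": "shipped"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet


def order_status(base_url, order_id):
    """Code under test: would normally call the real fulfilment service."""
    with urllib.request.urlopen(f"{base_url}/orders/{order_id}") as resp:
        return json.load(resp)["status"]


# Acceptance-style test: real HTTP over the wire, but against a local stub,
# so the test still cannot touch a shared environment.
server = HTTPServer(("127.0.0.1", 0), StubFulfilmentHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
try:
    status = order_status(f"http://127.0.0.1:{server.server_port}", 42)
finally:
    server.shutdown()
assert status == "shipped"
```

In a real acceptance suite the stub would of course sit behind the whole deployed application rather than be called directly, but the isolation property is the same.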
End-to-End Tests/System Tests
Tests that run across a ‘live’ implementation. There are no stubs, mocks, dummies or fakes here; everything is the real thing.
It’s fairly obvious that the tests at the top of the list are the quickest, cheapest and easiest to run, but offer the least assurance that your application works. Those at the bottom are the most expensive and slowest to write and run, but best represent its behaviour. The trick, of course, is finding the right mix of each type of test that lets you sufficiently test your app for the least amount of effort.
I’m certainly not going to claim that these definitions are the canonical ones but a couple of hours arguing this out proved to be hugely useful. I wonder how other teams out there do it?