Why QA needs to change

It’s unarguable that Continuous Delivery has gone from being just a CTO-friendly buzzword to a central requirement for a high-performance delivery team. It’s no longer cutting edge to merely check in your code to source control and have Jenkins or some other continuous integration server run the unit tests. You have to be able to get that code out into a live environment as fast as possible, and that means Continuous Delivery. The ability to deliver code into production at will has a direct effect on your bottom line, but to do this effectively you need two things: 1) an understanding of the important areas of functionality that customers really use, and 2) the ability to test these areas as quickly and easily as possible. The first is a business issue but the second boils down to automating your testing.

The trouble is that most of the industry holds on to a quality assurance process that is directly at odds with this. The reasons are mostly historical, but companies have had varying levels of success in the drive to automate QA, and the level achieved depends on how highly the company values this ability. So what levels of QA do we commonly see?

0 – Manual compilation, no unit tests, manual testing by QA
Before we all recoil in horror, it’s worth recalling how things used to be, with an enormous gulf between developers and QA. Thankfully this approach is lost in the mists of time for almost everyone, but if it has not been for you, you have my deepest sympathies.

1 – Automated compilation, some unit tests by dev, manual testing by QA
We start to see the process become more agile. Build scripts have made their appearance. Life is a bit easier for developers with the introduction of Continuous Integration, but not much different for QA, who are left out in the cold.

2 – Automated compilation, high standard unit tests by dev, automation of manual testing by QA
Build frameworks are now used that can fail a build which compiled successfully but did not pass its unit tests. QA are using tools which theoretically allow them to sign off individual user stories while still being able to quickly regression test the entire application to ensure that no new errors have been introduced. However, the separation of dev and QA ensures that almost every change by developers results in failing QA tests. The only way to get all the tests to pass is either to stop development or to fork the codebase so that the QAs can work on stable code. Bugs are fed back into the development branch and promoted to the QA branch. Congratulations for reinventing waterfall and for ensuring that the ratio between developers and QA remains 1:1.
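To make that mechanism concrete, here is a minimal sketch of such a build gate in Python. This is an illustration only, not tied to any particular build framework: the build is considered green only if the test command exits with code zero.

```python
import subprocess
import sys

def gate_build(test_command):
    """Run the test suite; a non-zero exit code means the build fails."""
    result = subprocess.run(test_command)
    return result.returncode == 0

# A suite that exits cleanly keeps the build green; a non-zero exit fails it.
build_ok = gate_build([sys.executable, "-c", "pass"])
```

In a real pipeline the same idea is usually one line of configuration: the CI server already treats a non-zero exit from the test step as a failed build.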

3 – Automated compilation, high standard unit tests by dev, integration tests by dev, automated testing by QA using better tools
Some of the QA workload is taken over by developers, who automate obvious integration points with the rest of the system. Some more load is taken off the QA, who can re-use tests with tools like FitNesse, but the phrase “dev complete” is still heard. Developers may provide QA with test utilities and abstractions like stubs and fakes to make testing easier and more deterministic. It’s almost inevitable that every team’s story board will have 3 or 4 times as many stories waiting for QA as are either “In Development” or “Undergoing QA”. You can do Continuous Delivery at this stage, but everyone wonders why it’s so painful and costs so much in time and effort.

What do we really need?

For us to ship reliable, quality code, we have to change not just the tooling but how the delivery organisation is structured. When manual testing was the norm, a separate QA organisation existed because it was more efficient and more effective to have non-developers test the code. With the levels of automation now available, a separate QA organisation is an anachronism which should no longer exist. So what do we really need?

4 – Automated compilation, high standard unit tests by dev, integration tests by dev, agile testing by dev, sign-off by QA
The role of the QA has mutated to that of QABA (aka ‘a bloody good BA’) – domain experts that represent the business in the delivery team but who are also responsible for creating the acceptance criteria for user stories, where the acceptance criteria are expressed as scenarios that can be easily converted into actual test code. Developers write the application code and the code that tests it, including creating any tooling. The QABA can then sign off the story on completion without having to go back to the business. The business still sees new features and capabilities at weekly demos and show’n’tells but is rarely involved with the delivery team on a regular basis. If you are B2C, better still is releasing new code into production without sign-off, hidden behind feature throttles that stop the new functionality from being seen outside the company network. That way the entire company gets to do UAT on new functionality before it hits the customers.
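As an illustration of what “acceptance criteria that convert easily into test code” might look like, here is a hedged sketch. The Basket domain and every name in it are hypothetical, invented purely to show the shape of the hand-off: the QABA writes the scenario, a developer turns each clause into code.

```python
# Scenario (written by the QABA):
#   Given a basket containing one item priced at 100
#   When a 10% discount code is applied
#   Then the basket total is 90

class Basket:
    """Hypothetical domain object used only for this illustration."""
    def __init__(self):
        self.items = []
        self.discount = 0.0

    def add_item(self, price):
        self.items.append(price)

    def apply_discount(self, percent):
        self.discount = percent / 100.0

    @property
    def total(self):
        return sum(self.items) * (1 - self.discount)

def test_discount_scenario():
    basket = Basket()          # Given
    basket.add_item(100)
    basket.apply_discount(10)  # When
    assert basket.total == 90  # Then

test_discount_scenario()
```

The point is the division of labour: the scenario carries the domain knowledge and the confrontational mindset, while the test code stays idiomatic because a developer wrote it.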

A fundamental pillar of continuous delivery is that all* your tests must be automated. To achieve this, the QA organisation should be in the business of writing the test scenarios that the code needs to be evaluated against and of signing off that the code does this. Test code should be a first-class citizen of the application and should be written by people whose primary job is writing code – the developers. I will say it again – QA should not be in the business of writing test code.

Developers are responsible for quality and should act like it. Sometimes that means taking back responsibility from the QA organisation that should never have been given to them. Quality is too important to leave to QA. Developers need to take full responsibility for the quality of their code, and they should be in the firing line if something is broken. The role of the QA is to keep the developers on the straight and narrow, and the most effective way of doing this is to get them to apply their confrontational mindset to the code via the acceptance criteria used to sign off the new functionality. Those of you familiar with BDD are probably nodding your heads right now, but I’m not sure that BDD is the answer. It’s the right approach from the perspective of test case creation, but the tooling is currently a zero-sum game – in all bar the most trivial cases, the effort you save by having natural-language test descriptions run is equivalent to the effort you have to expend in extending your tooling to support your test cases.
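To illustrate the tooling cost being described, here is a toy sketch of the “glue” layer a BDD-style runner needs: every natural-language step must be paired with a hand-written pattern and handler. The mini-runner and the bank-balance steps are hypothetical, not any real BDD framework – the point is that this mapping layer is code you must write and maintain yourself.

```python
import re

STEPS = []

def step(pattern):
    """Register a handler for a natural-language step (toy mini-runner)."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"a balance of (\d+)")
def given_balance(ctx, amount):
    ctx["balance"] = int(amount)

@step(r"(\d+) is withdrawn")
def when_withdrawn(ctx, amount):
    ctx["balance"] -= int(amount)

@step(r"the balance is (\d+)")
def then_balance(ctx, expected):
    assert ctx["balance"] == int(expected)

def run_scenario(lines):
    """Match each scenario line against the registered steps and run it."""
    ctx = {}
    for line in lines:
        for pattern, fn in STEPS:
            match = pattern.search(line)
            if match:
                fn(ctx, *match.groups())
                break
    return ctx

run_scenario(["Given a balance of 100",
              "When 30 is withdrawn",
              "Then the balance is 70"])
```

Every new phrasing in the scenarios means another pattern and handler here – which is the zero-sum trade-off the paragraph above describes.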

If you take a look at a company that prioritises the ability to ship code, e.g. Facebook, you will see developers taking far more responsibility for QA than is seen in the rest of the industry. It’s time for the rest of us to catch up.

* QA will ALWAYS be needed for a wide range of testing: performance testing, exploratory testing, and tests that are difficult or not cost-effective to automate – in-depth mobile testing is a classic example of this due to the insanely fragmented nature of that sector. What I mean here is that all the tests that are needed to sign off deployment must be automated – don’t forget that deployment and the code being active are two different things via feature throttles and the like.
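The deployment-versus-active distinction can be sketched with a minimal feature throttle. The flag store, feature name and helper below are hypothetical, just to show the mechanism: the new code path ships to production but stays dark for customers until it is signed off, while company staff can exercise it for in-house UAT.

```python
# Hypothetical flag store: the feature is deployed but not yet enabled.
FLAGS = {"new_checkout": {"enabled": False, "allow_internal": True}}

def feature_active(name, internal_user=False):
    """Is this feature visible to the current user?"""
    flag = FLAGS.get(name)
    if flag is None:
        return False
    if flag["enabled"]:
        return True
    # Staff inside the company network see it before customers do.
    return internal_user and flag["allow_internal"]

def checkout(internal_user=False):
    if feature_active("new_checkout", internal_user):
        return "new checkout flow"
    return "old checkout flow"
```

Flipping `enabled` to `True` activates the already-deployed code for everyone – no new deployment required.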

17 thoughts on “Why QA needs to change”

  1. Nonya

    Right, because developers are allotted all the time in the world to do all the testing required. I can imagine how a PM and a boss/lead will tell their developers to do all the automation work and smoke testing while putting off development work that customers are already waiting on. GET REAL. QA is there for a reason.

    1. Martin Post author

      Judging by your second statement, you also seem to imagine that testing is somehow separate from development. Again, this is not what is being proposed. Developers need to be allotted the time it takes to get the story signed off, and dev complete is nowhere near signed off, so why not speed everything up and get the people who are best at coding to actually write the code?

      And since your velocity goes up with a QABA and dev writing the test code, I think that the PM/boss will actually be pretty happy.

  2. joe

    In organizations where shipping products is the priority you will meet resistance to developers writing test automation. People have been beating this drum for a long time now and it has yet to catch on, and for good reason.

    You argue that we need “Automated compilation, high standard unit tests by dev, integration tests by dev, agile testing by dev, sign-off by QA”. In my experience, when you place more testing responsibility on dev, it takes longer to ship and the quality of test cases is lower. Separation of concerns allows developers to make products and testers to test them. Your argument of having developers do all the testing and then have QA sign off is ludicrous. As a tester I wouldn’t sign off on a project that I didn’t test.

    CI is an ideal to work towards, it’s a journey and only benefits the team when all the stars align and the CI loop is stable. I haven’t seen a single instance of CI working in a company-wide scenario where there exists both Java/C++/C# and HTML/JavaScript. It’s just a hard problem to solve when you start integrating so many different products within tiered architectures.

    Additionally, Jenkins is not that great a tool for CI testing. It’s good at building software, but it’s horrible at managing test runs and test results. You have to cobble scripts and plugins together into a fragile ecosystem to get something more than automated builds out of it. It’s like a web application for cron and vcs. It’s shiny and approachable, but you could do all that stuff with shell scripts (not that I’m condoning that).

    Here are some real solutions for you:

    Testers should be verifying customer use cases.
    Testers should be verifying specs.
    Testers should write test plans and review them in person with their product owners and development leads.
    Testers should automate tests as much as possible to make regression testing during product development as swift as possible.
    Testers should make their tests easy to run and reporting as clear as possible.
    Testers should keep in constant communication with their developers to ensure that the tests they’re working on are adding value to developers and the product.
    Testers are responsible for ensuring that the product or feature behaves as specified.
    Testers are responsible for getting test failures out in front of developers and product owners.

    If your testing is not adding value to the product by catching errors early and often then your testing is not valuable. CI is a great way to do that, but falls short where the CI system is not robust, does not communicate clearly or fails at any of the many things CI is supposed to do.

  3. Martin Post author

    You say it’s yet to catch on, and yet the likes of Facebook do it and it’s standard practice for the likes of ThoughtWorks.

    I also feel that you are slightly missing the point. I am drawing a distinction between test specifications and test code. The QA is still responsible for the test specifications but the implementation falls to the developer. The QA still uses their confrontational mindset in creating the test spec so no value is lost. Can you tell me why you think this is ludicrous?

    CI is an ideal to work towards, it’s a journey and only benefits the team when all the stars align and the CI loop is stable. I haven’t seen a single instance of CI working in a company-wide scenario where there exists both Java/C++/C# and HTML/JavaScript.

    Yes, it’s a hard problem to solve (and I think you are referring to CD here rather than CI), but are you honestly telling me that because you haven’t seen it happen it’s impossible? Adding the likes of Selenium, PhantomJS and Jasmine to xUnit makes this more than possible for Java/C++ and the web, and in a resilient, non-fragile manner (hint: make your best guys work on the tooling, not the code – it’s a force multiplier for the company). I have seen it done at two companies (although one was cheating slightly since their head of CD was Dave Farley).

    Something like Jenkins is just the start of the process, you are right. You need integration with automated deployments (Chef or Puppet), a build metadata service, deployment controllers and even a service to inhibit deployment and testing if there are change freezes or similar.

    Here are some real solutions for you…

    Your solutions are common in the industry and often regarded as best practice, but there are fundamental flaws that prevent automation by unnecessarily requiring manual intervention, like code branching and manual sign-off. Also, decent QA who can write good code are as rare as hen’s teeth, so it is a solution that does not scale. The QA code that I have seen has been a ball of mud of anti-patterns and hacks, and the effort to keep it running ended up taking more time than the application code. If you can find these people, fair play to you, but in two years of recruiting Automation QA, we’ve found maybe a couple.

    If your testing is not adding value to the product by catching errors early and often then your testing is not valuable.

    That assumes that your business domain is a simple one. Manually testing something like a financial exchange is a miserable business that can take weeks. Transactional systems are a nightmare to set up for QA purposes.

    CI is a start but only a start. CD is the natural goal that we are trying to get to and one that demands automated QA. Let the QA do what they are good at, writing test specifications, and let the developers do what they are good at – writing code.

  4. Pingback: Daily Morn by Raymond Li

  5. Jason Chaffee


    I don’t think Facebook is a good example, considering they often have buggy and crappy deliverables. 🙂

  6. Amir Ghahrai

    Have you heard of the role “Software Engineer in Test” or “Developer in Test”? Their role is to be competent in coding while looking at the SUT from a different perspective, from a QA mindset. In my opinion, if you get developers to test their own work, they would only verify that their code works and works as per the given acceptance criteria. I do strongly believe that there should be ample unit tests written by the dev, but when it gets to system test automation, they wouldn’t have the required knowledge to test the full application end to end.

    1. Martin Post author

      Yes, I’m aware of the role but I don’t think it’s a good one. The question is: what sort of developer is attracted to this role? Unfortunately, unless you are Google or similar, the answer is normally ‘a below-par developer’. Good and great devs do not want to be testing other people’s code. They want to be creating awesome code themselves. You are almost guaranteed to get bad developers if you ask for a developer who’s happy to write tests for other people’s code.

      My other point is that if you read my post, you’ll see that the developers do not have to have full end-to-end knowledge of the application – this is still the role of the QABA. The QABA creates the test specifications that stress this knowledge and the dev is responsible for turning them into automated tests. Look at what BDD is trying to achieve, for example. Tooling aside, this is fundamentally the same point I am suggesting.

      Free up the QA. Make them focus on what they are good at, testing, and stop forcing a square peg in a round hole.

      1. joe

        The question is: what sort of developer is attracted to this role? Unfortunately, unless you are Google or similar, the answer is normally ‘a below-par developer’. Good and great devs do not want to be testing other people’s code. They want to be creating awesome code themselves. You are almost guaranteed to get bad developers if you ask for a developer who’s happy to write tests for other people’s code.

        Wow. That’s one of the most insulting and assumptive things I’ve ever read. You obviously aren’t and have not worked with high caliber SDETs. Ludicrous.

        1. Martin Post author

          Like I said, unless you are Google or similar you will have a very hard time finding quality developers happy to work on testing other people’s code rather than writing their own. Your assertion aside, why would this be untrue?

  7. Kevin H.

    Hi Martin,
    I can see a lot of what you’re saying in the way that I’m working at my current company. I’m a tester within an IT Department of about 25, split into teams of 1 QA to 2–4 Devs, and I’m writing as many of the Acceptance Criteria as I can before any code is written, and the Devs are coding all of it (whether it happens at Unit level or higher is kind of irrelevant to me as long as it’s automated), so by the time it gets to me to Test I will mainly just be doing Exploratory Testing (I’ll obviously have had regular demos, etc, before the code is deployed to test). Because I’m now becoming an expert on various pieces of the systems we’re working with, I’m also finding I’m creating more of the stories that we work on instead of that being done by the Product Managers (BAs). I’m not sure I’d ever be totally happy with the Agile Testing being owned by the Developers – I feel it’s just too much of a different mindset required in the 2 roles – but there certainly feels to be much more collaboration between the roles, and a much higher willingness from the Developers to automate the testing as they go along, leaving the Testers to concentrate on the final functionality test and the wider-ranging Exploratory Testing. (Management buy-in is the crucial necessity though – here the Management Team really see the Automated Testing as a necessity and not a nice-to-have.)

  8. ChrisC

    Great post Martin – it’s exactly the conclusion we’ve come to. I’m amazed by the amount of negative comment it has generated.

    We treat test code as a first class asset, the same as application code. Our rule is that only somebody comfortable writing production code should write automated test code. The main difference to your stance is that there are some very technical testers out there who are quite happy writing automated test code. I wouldn’t want to automatically exclude them. I do agree with you that a sub-par developer of test code, whether they’re a tester or any other role, leads to following anti-patterns, hacks, etc. This obviously needs to be avoided.

    What we do require of a tester is technical awareness. They pair with developers during story development to ensure the correct automated tests are being implemented. They need to be able to review unit, integration and system level tests. This pairing is not fulltime and allows a single tester to cover around three developers. This ratio depends on many variables and needs to be reviewed on a regular basis.

    1. Martin Post author

      Thanks Chris.
      I could not agree more, especially “only somebody comfortable writing production code should write automated test code”. I think that that is fantastic. I also agree that if you have good testers who can write production-quality code then by all means use them (if I gave the impression that they should be excluded from coding then I apologise), but as I mentioned in an earlier comment, in my experience these people are as rare as hen’s teeth, so as a solution it does not scale. To me this is the difference with a QABA – your team can do CD with either a technical tester or a QABA, but the latter is easier to recruit and/or train and you are able to grow a company around this role.
      Thanks for stopping by and commenting.

  9. Jitendra Jogeshwar

    Having been a tester for the past 7 years, it is difficult to accept what Martin is saying, but I did like the comment from ChrisC where developer and tester pair for writing automation tests. The reason is that when pairing, the tester makes sure that scenarios are covered and the developer concentrates on writing good-quality code. Over time, both learn and evolve to satisfy each other’s role.

  10. Guru

    First of all, I stumbled upon this thread a bit late. While you were penning down your thoughts, I was probably doing the exact opposite. QA at my previous organisation was biased by the SDET culture of Amazon (obviously most people joined from there) and I have had this dilemma as an individual contributor and as a manager too. While you can still manage to hire some good coders in the test role by industry standards (assuming the mix we have), they just don’t cut it when compared to the developers in the same organisation. I have had SDETs in my team wanting to move to a dev role and I have encouraged it, but very few could even get past the interview rounds within the company. SDET was created as an intermediate role mostly composed of QAs who could code better, and it has been fuelled by the thought of having an opportunity to do something closer to a dev role and make more money. In my current organisation, though, we have been having similar thoughts to what Martin has shared here. Food for thought: product managers don’t code, but the role is not considered low-paying or one with low satisfaction levels.

    Can you please take this a little further? I have been having challenges coming up with a plan for the transition. What exactly constitutes exploratory or adverse testing? How do you train your team to think of these scenarios? How useful have they been in your experience of running a high-impact QA org?

  11. Pingback: Initial look at automating your testing | Raymond Li

  12. Pingback: Initial look at automating your testing | rayli.net
