Test-Driven Development (TDD) is an approach to development that changes the traditional sequence of first designing a program, then coding it, and then testing it. Instead, TDD works in very small increments: coding a test, writing the program code to make it succeed, and improving the code. This results in many (often thousands of) small automated tests that can be executed in a few seconds.
For testers it is important to note that these effective automated unit tests eliminate most of the demand for manual test execution. We might therefore ask whether we need testers in a TDD team.
On the surface it seems you need ‘testers’ with or without TDD, but the reality is more complex. Let’s look at these complexities:
- If you want to start doing TDD, it is not advisable to try to combine it with old school QA and old school functional testers.
- If you are already successful in TDD it still makes sense to look for a team member specialized in testing.
- Successful testers in TDD teams differ from old school functional testers by having a more solid technical background.
Rise and Fall of QA
In Is TDD Dead? Kent Beck (KB), Martin Fowler (MF) and David Heinemeier Hansson (DHH) talk about QA and testing. They identify three phases in the history of specialized testers:
- Big QA: Often dysfunctional QA departments with many functional testers
- Abolishing QA: Overconfidence in programmer testing, doing without testers
- Current realization: Having some (even functional) QA is necessary
The Big QA of the nineties seems to be history. Many IT organizations have dissolved their QA departments and spread their testers over Agile teams. However, in many of those teams the testers are still doing the same manual testing they did in the nineties. Many organizations are therefore still stuck with the same dysfunctional testing they had twenty years ago.
The dysfunctionality of old school QA lies in its excessive use of functional testers: professionals specialized in manual testing, but with few technical skills. Their specialization makes functional testers good at 'testing' functionality. However, old school QA has a tendency (and often a commercial interest) to also use these testers to 'check' functionality.
The defining feature of a 'check' is that it is a test that can be completely automated (Bach and Bolton 2013). This also means that a 'check' can be done by programmers. Using a tester or a programmer for checks then seems arbitrary, but it isn't: testers must spend much more time per bug to find, isolate, report, and track it, and to advocate for the fix (Kaner, Hendrickson and Brock 2001).
Using manual testers to 'check' functionality made old school QA inefficient. It became dysfunctional when it fostered the attitude: 'never test your own work, throw it over the wall to QA' (KB and DHH in the conversation). This can degenerate into a downward spiral where an ever increasing number of testers leads to an ever increasing bug count ('Better Testing – Worse Quality?', Hendrickson 2001).
Abolishing QA is a logical reaction to the mostly dysfunctional practice of manual testing. The reason this article is not called 'Testers in Agile teams' is that the option to abolish QA is not open if your Agile team is, for example, doing Scrum without unit test automation or works with COTS software. To work without functional testers, you have to do TDD, or another method that results in automated unit tests.
In the most likely scenario, doing TDD means you have to change the skill set, habits, and often even the attitude / ego of the programmers you already have. This is not easy, and neither is TDD itself: 'legacy code, proper unit test isolation, and integration tests are particularly difficult to master.' (Shore 2007) Estimates are that switching to TDD leads to months of significantly lower productivity for the involved programmer. On top of that, it requires coaching for weeks or months at the person's place of work during their regular tasks (Larman, Vodde 2008).
In my experience old school programmers and testers often live in symbiosis. The old school programmer does not want to do unit testing, and as long as the tester is around, he can get away with it. The old school tester does not want to learn technical stuff, and as long as the programmer produces enough bugs, he gets away with it. Old school programmers and testers therefore have a common interest in preventing change, and that's why I consider putting a functional tester in the same team as the programmer switching to TDD to be a bad idea.
This is an anti-pattern I have observed time and again: If you want to switch to TDD or any other developer testing, the mere presence of a functional tester in the team will likely ruin your efforts. If you want to do TDD, my advice is to remove functional testers from the team!
The realization that having some QA is necessary while doing TDD can be an unexpected turn of events. In the above conversation about TDD and QA David Heinemeier Hansson says: 'All your tests might be green, but they are not finding the real problems. As soon as you get in the real world users do things you do not expect.'
Martin Fowler applauds David, but in the same conversation Kent Beck is more careful. He does however admit that with regard to QA 'the pendulum might have swung too far', and that if you cannot foresee all possibilities some external feedback 'certainly makes sense'.
Testing and team composition in a TDD team
We left the above conversation with the realization that with TDD it still makes sense to have testing in addition to the tests that are created while programming. The concepts of Agile testing have been exhaustively described in books like 'Agile Testing' (Crispin, Gregory 2009); however, it does seem debatable whether Agile testing requires 'testers', i.e. employees specialized in testing. Google has hundreds of them, Facebook almost none.
An average company has different considerations. It has to ensure that its employees collectively have the tooling knowledge and conceptual knowledge to develop and maintain all its applications, and it has to ensure an efficient division of labor. Let's see what this means for having testers in a Java environment.
The tools for TDD in Java are JUnit and a mocking framework. The average developer will master these. However, besides enabling TDD in Java, the JUnit framework has sparked a second revolution in testing: the ability to automate not only unit tests, but almost any test.
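To make this concrete, here is a minimal sketch of such a TDD-style unit test, written with JUnit 4 and Mockito before the production code exists (the PriceService and TaxRepository names are hypothetical):

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class PriceServiceTest {

    @Test
    public void addsTaxToTheNetPrice() {
        // Mock the collaborator so the test remains an isolated unit test.
        TaxRepository taxes = mock(TaxRepository.class);
        when(taxes.rateFor("NL")).thenReturn(0.21);

        PriceService service = new PriceService(taxes);

        // Red first, then just enough production code to go green, then refactor.
        assertEquals(121.0, service.grossPrice(100.0, "NL"), 0.001);
    }
}

// The production code that makes the test green (written after the test):
interface TaxRepository {
    double rateFor(String country);
}

class PriceService {
    private final TaxRepository taxes;

    PriceService(TaxRepository taxes) {
        this.taxes = taxes;
    }

    double grossPrice(double netPrice, String country) {
        return netPrice * (1 + taxes.rateFor(country));
    }
}
```

Hundreds of such tests run in seconds, which is what makes the small TDD increments practical.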
JUnit is now also used to run integration tests with JAX-RS, automated acceptance tests, Selenium WebDriver tests, parameterized tests with massive data sets, and much more, all of it connected to CI solutions.
Besides these test tools, the number of other tools and frameworks has also exploded. One can conclude that the average developer can hardly master all of the tools an average modern project uses.
Conceptual knowledge is fundamental for creating quality applications. Maintainability requires that a developer knows about clean code, a subject that takes years to master. If we want to expand our mastery, design patterns, threading, and performance theory seem logical next steps.
Accurate, maintainable, and fast code is crucial, but it does not even guarantee a dependable application. For that our developers also need to know about safety and security. For creating an application that users love to work with, they need to know about UX. And for devising an effective way to guarantee all this, they also need to know about testing.
Division of labor is a third major concern when composing an IT department. If we choose specialists, we could have a team with a security specialist, a UX designer, and a tester, but this leaves little room for coders. The team would not produce much.
On the other side of the spectrum we could have a team of generalists. Unless these are geniuses, this would mean that the team spends the better part of each day studying. Such a team would also not produce much.
The conclusion of the above is that some specialization is necessary in a development team. One cannot expect every developer to master every tool and to be a specialist in clean code, UX, security, and testing. On the other hand, there is a practical limit to the number of specialists you can employ.
Forced to make a choice in specialization, it makes sense to have a test specialist. Given the choice, most developers will not research beyond unit testing, and many will not even test at all; the reason is that many developers do not like testing, or even hate it. In such circumstances, attempts to switch to Agile testing require a specialist who is passionate about testing and able to implement it.
This is not unlike implementing TDD itself: it is about coaching and showing. If such a test specialist creates, for example, a service test that can be executed from the IDE, programmers will probably use it. What's more, if they find it useful they might become test-infected, start to expand it, and do so in a maintainable way. Once test-infected, programmers will continue to test, but in my experience they will not infect themselves.
TDD: Testers with a solid technical background
In the part about the rise and fall of QA I concluded that in a TDD environment that automates manual checks, there is far less demand for traditional testers without many technical skills. Later on we saw that after the introduction of JUnit and TDD, developers created a myriad of test tools that testers without some technical skills will not be able to use.
We might safely conclude that in a TDD environment we need a new kind of tester with a more solid technical background. To describe his activities, we consider the situation where TDD is in place. For Agile testing, TDD fills the base of the automation pyramid (Cohn 2009) and the first of the testing quadrants (Marick 2003; Crispin 2009).
The effect can be made clear by considering the test of a form input field for an integer with a boundary and back-end validation. We might throw 16 functional test cases at it: {boundary, boundary-1, boundary+1, decimal, locale, Z, 0, null, “”, “ “, abc, UTF-8, 2^31-1, 2^31, -2^31, -2^31-1}, but these are basically unit tests belonging in test quadrant 1 (technology facing tests that guide development).
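A sketch of how such cases end up as automated quadrant 1 tests rather than manual work, here as a JUnit 4 parameterized test (the IntegerFieldValidator class and its inclusive boundary of 100 are hypothetical, and only a sample of the 16 cases is shown):

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class IntegerFieldValidatorTest {

    @Parameters(name = "{0} -> {1}")
    public static Collection<Object[]> cases() {
        return Arrays.asList(new Object[][] {
                { "100", true },          // boundary
                { "99", true },           // boundary - 1
                { "101", false },         // boundary + 1
                { "1.5", false },         // decimal
                { null, false },          // null
                { "", false },            // empty string
                { " ", false },           // whitespace
                { "abc", false },         // non-numeric
                { "2147483648", false },  // 2^31: overflows a Java int
                { "-2147483649", false }, // -2^31-1: underflows a Java int
        });
    }

    private final String input;
    private final boolean valid;

    public IntegerFieldValidatorTest(String input, boolean valid) {
        this.input = input;
        this.valid = valid;
    }

    @Test
    public void validatesInput() {
        assertEquals(valid, new IntegerFieldValidator(100).isValid(input));
    }
}

// A hypothetical validator that makes these cases pass:
class IntegerFieldValidator {
    private final int boundary;

    IntegerFieldValidator(int boundary) {
        this.boundary = boundary;
    }

    boolean isValid(String input) {
        if (input == null) {
            return false;
        }
        try {
            // parseInt rejects decimals, text, whitespace, and int overflow.
            return Integer.parseInt(input) <= boundary;
        } catch (NumberFormatException e) {
            return false;
        }
    }
}
```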
With TDD these cases are automated (as sketched above), and the tester should not (cf. above) execute such test cases manually. In general he should automate the verification of the presence of the field and a positive case (testing quadrant 2: business facing tests that guide development). This can be done with a record-and-playback tool, but that is not a maintainable solution. A more effective technique is programming it (as clean code) in Selenium WebDriver, in the same IDE the rest of the team uses.
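A minimal sketch of such a quadrant 2 check in Selenium WebDriver with JUnit 4 (the URL and element ids are hypothetical):

```java
import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class AmountFieldTest {

    private WebDriver driver;

    @Before
    public void openForm() {
        driver = new FirefoxDriver();
        driver.get("http://localhost:8080/order");
    }

    @Test
    public void fieldIsPresentAndAcceptsOnePositiveCase() {
        // findElement fails the test if the field is absent, which covers 'presence'.
        driver.findElement(By.id("amount")).sendKeys("42");
        driver.findElement(By.id("submit")).click();
        assertTrue(driver.findElement(By.id("confirmation")).isDisplayed());
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }
}
```

Because this lives in the same IDE and version control as the production code, every team member can run and refactor it.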
Other tests in quadrant 2 are user story tests. These can also be automated. 'As a user of InfoQ, I want to log in so I can download special content' should lead to automated tests of e.g. REST calls. As with our little test on the GUI, one might use an external tool (e.g. SoapUI) here. However, programming the test to run with JUnit as an integration test ('LogInIT.java') is far more effective: other team members can also run and maintain it, without licenses and without having to study the tool.
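A sketch of what such an integration test might look like with the JAX-RS 2.0 client API (the endpoint and credentials are hypothetical; the IT suffix lets build tools such as Maven Failsafe run it in the integration-test phase):

```java
import static org.junit.Assert.assertEquals;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import org.junit.Test;

public class LogInIT {

    @Test
    public void loginSucceedsWithValidCredentials() {
        Client client = ClientBuilder.newClient();
        try {
            // POST the credentials to the login resource, as the story describes.
            Response response = client
                    .target("http://localhost:8080/api/login")
                    .request(MediaType.APPLICATION_JSON)
                    .post(Entity.json("{\"user\":\"reader\",\"password\":\"secret\"}"));

            assertEquals(200, response.getStatus());
        } finally {
            client.close();
        }
    }
}
```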
With the basics of functionality getting checked automatically, we get to quadrant 3 (business facing tests that critique the product): the team has created room for exploratory testing. In the above conversation David Heinemeier Hansson said that users do things you don't expect. The same goes for systems, where it is called emergent behavior. Because you do not know what to expect, Exploratory Testing (Hendrickson 2013) is the way to go here.
Exploratory Testing (ET) is based on iterating small cycles of executing tests, learning about the application, and designing new tests. These tests can initially be inspired by the very handy Test Heuristics Cheat Sheet (Hendrickson 2006), but simply executing these 'cheats' is not ET. The real value of Exploratory Testing comes from its iterative character and from applying knowledge.
For example, the Heuristics Cheat Sheet lists 'hack the url (change/remove parameters)' as a web test. An attempt to script this, or to do this without preparation, is not useful. We will be more successful if we first spend some test iterations learning how the application uses these parameters, then think of (design) a relevant test, and then execute it. Needless to say, the ability to apply knowledge of the HTTP protocol comes in handy with this test.
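Once those learning iterations have shown how a parameter is used, the test we designed can even be pinned down as an automated regression check. A sketch, again with the JAX-RS client (the URL, the parameter, and the expected behavior are hypothetical):

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

import org.junit.Test;

public class TamperedParameterIT {

    @Test
    public void nonNumericOrderIdGetsAControlledError() {
        Client client = ClientBuilder.newClient();
        try {
            // 'Hack the url': send a non-numeric id where the application expects a number.
            Response response = client
                    .target("http://localhost:8080/order")
                    .queryParam("orderId", "abc")
                    .request()
                    .get();

            // We expect a deliberate 4xx, not a 500 with a leaked stack trace.
            assertEquals(400, response.getStatus());
            assertFalse(response.readEntity(String.class).contains("Exception"));
        } finally {
            client.close();
        }
    }
}
```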
What I typically do during Exploratory Testing is run the application in my IDE, monitor the application server log, open the database, and monitor network traffic. This of course shows errors that are not displayed on the GUI. Other things I typically find are excessive network errors and traffic, log pollution, unexpected persistence behavior, excessive or inefficient database queries, security vulnerabilities, usability errors, etc.
This does not mean that with TDD in place all test work becomes very technical or tool driven. There are also very important tests to do with personas (Ambler 2003-2014), or with regard to UX. These tests are indeed less technical, but that does not mean they do not require in-depth knowledge.
The above showed that when TDD eliminates the demand for manual functional testing (i.e. checking), the role of the tester changes. He might still have a lot to do, but his functional tests should probably be automated, and his manual (exploratory) tests would probably be more effective if he had more knowledge of technique, tooling, or other subjects that are not easy to master.
So how does all this translate to the knowledge and technical skills a tester in a TDD team should have? There can be little doubt that a statement like 'Agile testers tend to have good technical skills, know how to collaborate with others to automate tests, and are also experienced exploratory testers' (Crispin, Gregory 2009) also applies to TDD teams.
I do however believe that the translation to specific job requirements differs between an Agile team that does TDD and one that does not. In some non-TDD teams the Agile tester might be forced to use a test tool that the developers don't use, or he might do a lot of manual checking. In a TDD team the tester is far more likely to work from his IDE, leading to technical requirements like:
- Experience with at least one programming language (for reading and programming tests).
- Knowledge of the command line and scripting (for using servers and the local machine).
- Experience with databases (for checking persistence without a GUI).
Conclusions
The referenced conversation between Kent Beck, Martin Fowler, and David Heinemeier Hansson was what prompted me to write this article. Anybody interested in testing should listen to their rather direct and honest statements about throwing code 'over the wall to QA' and how having no QA is better than having the old QA.
In trying to shed some light on this, I started by describing old school functional testing and how it can degenerate into mindless functional checking that does more harm than good. This is not out of historic curiosity, but because of strong indications that many organizations still work this way, 'Agile' or not.
I talked next about why the combination of TDD developers and old school functional testers might be inadvisable. In the paragraph on team composition, I defended having a tester role in a TDD team, arguing that it is justified by the need to have somebody in the team who is passionate about testing.
With regard to the skills of testers I indicated that with TDD, there is no demand for old school functional checking. There is room for testers in a TDD team, but their tests require more serious technical skills.
Takeaways
If you are a tester doing manual checks, you should consider that TDD and other solutions that automate manual checks are here to stay. If your technical skills are below those I mentioned above (knowledge of and experience with the command line, databases, and a programming language), it's time to bring your knowledge up to par so you can do more interesting testing! The book 'More Agile Testing' (Crispin, Gregory 2015) has extensive content about what to learn, and I seriously recommend it to everybody who wants to continue in testing. For all of this, it's advisable to take modules of formal education. This leads to a better understanding of a subject and quicker learning, and you will be able to prove your knowledge.
If you are a team lead or a manager frustrated with testing problems, you might want to consider what is necessary for implementing advanced test solutions: somebody who is capable of implementing those solutions, but also passionate about testing. In Programmers as Testers? (Gregory 2011), Janet Gregory writes about preferring testers with a technical background, but not hiring them as testers if they see the role only as a stepping stone to becoming a programmer. This makes sense: if the tester is not passionate about testing, he is not going to decently implement the testing quadrants or exploratory testing. On the other hand, a tester who does not have the required skills cannot implement test automation, or even be fully effective at exploratory testing. In other words: Agile testing requires both skill and passion.
References
- Ambler (2003-2014), Personas: An Agile Introduction
- Bach, Bolton (2013), Testing and Checking Refined
- Cohn (2009), The Forgotten Layer of the Test Automation Pyramid
- Crispin (2009), Agile Test Planning with the Agile Testing Quadrants
- Crispin, Gregory (2009), Agile Testing: A Practical Guide for Testers and Agile Teams
- Crispin, Gregory (2015), More Agile Testing: Learning Journeys for the Whole Team, http://www.amazon.com/More-Agile-Testing-Learning-Signature/dp/0321967054
- Gregory (2011), Programmers as Testers?
- Hendrickson (2001), Better Testing – Worse Quality?
- Hendrickson (2006), Test Heuristics Cheat Sheet
- Hendrickson (2013), Explore It!
- Kaner, Hendrickson and Brock (2001), Managing the Proportion of Testers to (Other) Developers
- Larman, Vodde (2008), Scaling Lean & Agile Development
- Marick (2003), Agile Testing Directions: Tests and Examples
- Shore (2007), The Art of Agile Development
About the Author
Maarten Folkers is a test consultant with extensive experience in (managing) traditional approaches to testing and in applying modern test-related technologies. The latter range from programming TDD-style to build and deployment automation, integrating protocol-level and GUI-level tests in build pipelines, and (evangelizing for) Exploratory Testing. Maarten has a Master of Laws degree and is studying for a bachelor's degree in computer science. He lives in Den Bosch, the Netherlands, and cares about history, cooking, and running.