Key Takeaways
- Code is always testable; by identifying anti-patterns and fixing them, we can make it highly testable at reasonable cost.
- Design and code testability affect the ability to automate tests.
- Design decisions are made by developers, and testers can influence them for improved testability.
- Clean code practices and testability go hand in hand, so both developers and testers benefit.
- Ongoing joint discussions between developers and testers can help improve testability.
- Team leaders and managers should foster the joint discussions as part of improvement processes.
When we write automated tests, we can run into problems. The tests won’t pass, or we’ll spend a lot of time and effort getting them to. “That code is not testable”, we’d say. Mostly, that’s not true. Code is always testable, but the cost may be high, and the effort exhausting.
The good news is that we can change code to be highly testable, by identifying anti-patterns and fixing them. The better news is that we developers can make the code fit the test requirements, by having discussions with the testers, who actually test it.
Can testers really affect the way code is written?
It really depends on the relationship between testers and developers. In a cohesive agile team there’s an openness. But in many cases, testers get their “ready to test” code, weeks or months after the developers have finished programming. At this point, asking the developers to “go back in time”, leave whatever they are doing, and change what they consider “already working code” doesn’t sound too delightful.
But there are other issues that make developers less attentive to their testers’ needs. First, they believe (because of what organizations teach them) that when they push their code, it becomes somebody else’s job. They are not aware of what effort testers need to go through to perform testing. In fact, many times, they are not even aware of test plans, resources and sometimes even results (apart from the bugs).
So the distance in time, knowledge and thinking - all of these make the discussion between developers and testers not very effective, especially in terms of testability. Coming in with requests at that late stage is simply too late.
Code patterns that lead to better testability
There are many code patterns that we know are good for developers, and anti-patterns that we know are bad. Usually we look at them in terms of maintainability, but they have an impact on testability as well.
Let’s start with an easy one. Let’s say we have a service that’s calling a database. Now, if the database properties are hard-wired into the code, every developer will tell you that’s a bad thing, because you can’t replace the database with an equivalent. In a testing scenario we might want to call a mock or local database, and hard-coding a connection will impact our ability to run the code at all, or to point it at a different database. In what we call pluggable architecture it’s easy to do this, but the code needs to be written like that in the first place. That’s a win for both testers and developers. In fact, many clean code practices and patterns improve both code maintainability and testability.
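To make this concrete, here’s a minimal sketch (the class and interface names are mine, not from a real codebase): the service depends on an interface rather than a hard-wired connection, so production can plug in a real database and a test can plug in an in-memory fake.

```java
// Sketch of a pluggable database dependency (hypothetical names).
// The service depends on an interface, not a hard-coded connection.
interface UserStore {
    String findName(int id);
}

// A production implementation would wrap a real connection;
// a test can plug in this in-memory fake instead.
class InMemoryUserStore implements UserStore {
    public String findName(int id) {
        return id == 1 ? "alice" : "unknown";
    }
}

class GreetingService {
    private final UserStore store;

    GreetingService(UserStore store) {  // injected, not hard-wired
        this.store = store;
    }

    String greet(int userId) {
        return "Hello, " + store.findName(userId);
    }
}

class PluggableExample {
    public static void main(String[] args) {
        // Same service code, different plug - no production database needed.
        GreetingService service = new GreetingService(new InMemoryUserStore());
        System.out.println(service.greet(1));
    }
}
```

The test never touches connection strings or drivers; swapping the implementation is a constructor argument away.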
Now let’s take a look at another aspect of pluggability. Our service now calls three other services and two databases. But we’re not interested in checking the whole integration. Our current test is interested in only calling the first database. Let’s say our developers have learned from their previous mistake, and components are pluggable all the way.
If you’re using Java’s Spring framework, for example, for injecting all the components, you’ll need to supply mocks and other test doubles to run the tests. But not just for the database we’re interested in - for every bean out there, we’ll need to supply a double. Some would be easy to configure, and some would be hard. And again - these are components that are of no interest for the specific scenario - we need them just to make the test run.
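The wiring cost is framework-independent; a plain-Java sketch (hypothetical names) shows it: the test only cares about the inventory path, yet it must still hand the constructor a double for every collaborator.

```java
// Sketch of the wiring cost (hypothetical names): OrderService takes
// three collaborators, so even a test that only exercises the inventory
// path must supply a double for each of them.
interface Inventory { int stockOf(String sku); }
interface Billing  { void charge(String account, int amount); }
interface Shipping { void schedule(String sku); }

class OrderService {
    private final Inventory inventory;
    private final Billing billing;
    private final Shipping shipping;

    OrderService(Inventory inventory, Billing billing, Shipping shipping) {
        this.inventory = inventory;
        this.billing = billing;
        this.shipping = shipping;
    }

    boolean inStock(String sku) {          // only touches Inventory...
        return inventory.stockOf(sku) > 0;
    }
}

class WiringCostExample {
    public static void main(String[] args) {
        // ...yet we still provide no-op doubles for Billing and Shipping,
        // just to satisfy the constructor.
        OrderService service = new OrderService(
                sku -> 3,                  // the double we actually care about
                (account, amount) -> {},   // filler
                sku -> {});                // filler
        System.out.println(service.inStock("book-42"));
    }
}
```

With two lambdas the filler here is cheap; with real beans that have their own dependencies, it isn’t.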
In these examples (and there are many others), design and coding decisions made by the developer have an impact on being able to run the code in a testable manner, or on the ability to write a test easily. By writing highly coupled code, testability degrades. And by hard-coding database properties - again, a design decision by the programmer - testability can be eliminated. The developer could have gone with better modularization and pluggability, for better testability options. Remember that you have almost infinite design options, but you should choose the ones that benefit quality and maintainability.
If the code is not highly testable, extra test code, effort and jumping through hoops will be necessary to make the tests run. If, for example, the design requires an authenticated user to test scenarios, we’ll need to set up a user for each scenario, set up two-stage authentication, and delete the user afterwards - a lot of hassle just to test how a page behaves. On the other hand, if the design allows “unplugging” (or short-circuiting) the authentication process, the automated tests can be easier to write. Many times, we don’t have time for that extra effort, and those test scenarios are either not performed, or deprioritized.
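What “unplugging” authentication can look like, as a sketch with hypothetical names: the page logic asks an interface, so a test can short-circuit the whole two-stage flow with an always-authenticated double.

```java
// Sketch of short-circuiting authentication (hypothetical names).
// The page asks an Authenticator interface; it doesn't care how
// the answer was produced.
interface Authenticator {
    boolean isAuthenticated(String user);
}

class ProfilePage {
    private final Authenticator auth;

    ProfilePage(Authenticator auth) {
        this.auth = auth;
    }

    String render(String user) {
        return auth.isAuthenticated(user)
                ? "Welcome back, " + user
                : "Please log in";
    }
}

class AuthShortCircuitExample {
    public static void main(String[] args) {
        // No user setup, no two-stage login: the double just says yes,
        // and the test can focus on how the page behaves.
        ProfilePage page = new ProfilePage(user -> true);
        System.out.println(page.render("dana"));
    }
}
```

The real two-stage authenticator still exists and gets its own tests; the point is that page-behavior tests no longer pay its setup cost.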
And that’s a shame. As testers, we need to provide the most encompassing information to stakeholders. If we don’t have the ability to run important scenarios, or they take too long to build, or are brittle, we won’t be able to do that.
In the test automation workshops I run, I discuss patterns in code that testers can look at and say: “This will cost me time later; we better change that now”. More testers today are code literate, and I encourage what we still call “manual testers” to learn the programming language their product is written in. My experience tells me that when testers talk with developers, with understanding of their code, the result is improvement in testability.
The easiest one I teach is when “new” is used instead of injecting a dependency. If you know that’s a “heavy” dependency, you’d better change it now. Yes, there’s a cost to that, but when you catch it early, it’s usually low. There is a risk there, too: by modifying the code (without tests, since we do it for improved testability), we can break something. Reviewing the changes before and after making them can alleviate the risk. And having a tester to discuss the risks with improves other aspects of testability.
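Here is the before/after shape of that refactoring, sketched with hypothetical names (the “heavy” client stands in for anything slow or network-bound):

```java
// Before/after sketch of the "new instead of inject" smell (hypothetical names).
interface ReportClient {
    String fetch();
}

// Stand-in for the "heavy" dependency: slow to construct,
// talks to the network, etc.
class HeavyReportClient implements ReportClient {
    public String fetch() { return "live data"; }
}

// Before: the dependency is created with "new" inside the class,
// so a test cannot swap it out.
class ReportBefore {
    private final ReportClient client = new HeavyReportClient();
    String run() { return client.fetch(); }
}

// After: the dependency is injected through the constructor,
// so a test can pass a cheap double.
class ReportAfter {
    private final ReportClient client;
    ReportAfter(ReportClient client) { this.client = client; }
    String run() { return client.fetch(); }
}

class NewVsInjectExample {
    public static void main(String[] args) {
        ReportAfter report = new ReportAfter(() -> "canned data");
        System.out.println(report.run());
    }
}
```

The change is mechanical - move the `new` out to a constructor parameter - which is exactly why its cost is low when caught early.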
As a final example, here’s a symptomatic smell. In code smells we have “god methods” and “god classes” to tell developers that the code is complex and too big. There’s an inverse relationship between big complex code and how testable it is. Every time. Developers need someone to point that out to them. If they write their own tests, they find out very quickly, and change it. Unfortunately, that’s not always the case.
Making code changes for testability
There’s a lot of commonality between clean code practices and testability. Pluggable architecture is good practice for both extensibility (we can replace things without a new release) and testability (we can plug a mock database). So it’s good for the developer and the tester.
There’s changing code for testability as part of development, and there’s changing it “after development” (that’s just our perception, since code is almost always in development). Whether a change is “worthwhile” is the developer’s point of view, but sometimes the cost (e.g. removing the Java keyword “final” to make a class extensible) may not be that high or risky.
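The `final` example is a one-word change with a real testability payoff, as this sketch shows (hypothetical names; the slow lookup is implied by the comment):

```java
// Sketch of what removing "final" buys (hypothetical names).
// Once the class is no longer final, a test can extend it and
// override just the expensive part.
class PriceCalculator {            // was: final class PriceCalculator
    int basePrice(String sku) {
        return 100;                // imagine a slow catalog lookup here
    }
    int priceWithTax(String sku) {
        return basePrice(sku) + basePrice(sku) / 10;
    }
}

class FinalRemovedExample {
    public static void main(String[] args) {
        // The test double overrides only the expensive call,
        // and the tax logic is exercised for real.
        PriceCalculator stubbed = new PriceCalculator() {
            @Override
            int basePrice(String sku) { return 50; }
        };
        System.out.println(stubbed.priceWithTax("book"));  // 55
    }
}
```

The trade-off is the usual one: `final` documents an intent not to extend, so this is a change to discuss, not to make silently.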
Here’s another idea of how to present the suggestion: we’re not changing the code for the tester’s sake. Making changes leads to a product that will be tested more thoroughly, and therefore we’ll know more about what we’re releasing.
After all, we can always agree that the code is not perfect. If we look, we often find bugs there. So why not make the changes that in the end lead to better quality?
The benefits are endless
The main benefit is the discussion between developers and testers. I remember way back, as an inexperienced team lead, when my project was going downhill and I didn’t know what to do. What saved the project was putting my tester and developer at the same computer. She didn’t know how to code, he didn’t know how she tested. When they started talking, magic happened.
Now, imagine how this can work for you. Discussions on complex code and systems that are changing all the time will move quality forward, simply by having these conversations.
But then, at a technical level, we’re talking about the ability to test the system better, and to provide the information that today gets summed up as “we didn’t have time to test”. And like I said, quality of product goes hand in hand with quality of code.
What tech leads or architects can do to foster testability improvement
Joint discussions are key. It starts with planning sessions - both for design and architecture, and for tests. Developers need to know how tests will be done, and testers need to know how code will be written. An extra benefit is that they can point out risks quite early. If the changes are in a certain area, testers will know what else they need to test. If the area is buggy, they can ask developers to add more unit tests around it.
In retrospectives, discuss what is “hard” or “easy” to test. Define that and explore why. We can learn which practices we’ll need to keep doing next time.
Joint sessions and discussions are the basis for effective communication, and they also serve the goal of a quality, maintainable product.
About the Author
Gil Zilberfeld (TestinGil) is an agile software testing consultant with more than twenty years of experience in development and testing. He’s the author of “Everyday Unit Testing” and “Everyday Spring Testing”. He’s a regular speaker at international conferences. Zilberfeld will give the workshop “Make it public! And other testability improvements” on code testability at Agile Testing Days 2021. The conference will be held November 15-18 in Potsdam, Germany.