First Steps in Unit Testing

In addition to being a software industry best practice, unit testing is promoted by agile methodologies as a pillar for sustainable software production. According to the most recent annual Agile survey, 70% of the participants said they unit test their code.

Unit testing goes hand in hand with other agile practices, so starting to write tests is a stepping-stone for organizations wanting to go agile. The road is long, but it is worth taking. In this article, I’ll cover what to expect and the steps to take when starting out, in order to make unit testing a part of development life.

There’s an implicit notion about effective unit tests - they are automated. Without automation, productivity tumbles, and unit testing cannot be sustained as a long-term habit. Relying on manual testing (done by either testers or developers) doesn’t stick; under pressure, no one remembers to run all the tests or to cover all the scenarios. Automation is our friend, and all unit test frameworks have embraced it, along with integration with other automated systems.

Unit testing is crucial for modern development

With tests around our code, we have a built-in safety net. If we change our code and break something, the tests let us know. The bigger the safety net, the more confident we are that the code works and that we can change it when needed.

The major benefit of unit tests over other types of tests is quick feedback. Running suites of hundreds of tests in a matter of seconds helps the development flow. We’re forming a cadence of adding some code, adding a test, seeing the tests pass and moving forward. Moving in small steps, knowing everything is working, also means debugging time drops immensely. It’s no wonder we feel more productive with tests - there’s less time spent on bugs, while a lot more time is spent on pushing features out.

The wall of dependencies

Adding tests to a greenfield project is considerably easier - after all, the code is not there to get in the way. However, this situation is definitely not the norm. Most of us work on legacy code, which is not easily testable. Sometimes we can’t even run the code - it may require data or configuration that exists only on production servers. We may need to create different setups for different scenarios, which can take a lot of effort. In many cases, we may need to change the code in order to test it. This puts us in a catch-22: we write tests to gain the confidence to change our code without breaking it - yet here we must change the code before we have any tests to protect us.

Code testability is a function of language and tools. With dynamic languages, like Ruby, code is considered testable as is: we can change the behavior of the code’s dependencies from inside the tests, without touching the production code. Statically-typed languages like C# or Java make this more difficult.

Here’s an example: an expiration checker method in C# that checks for expiration against a constant date:

public class ExpirationChecker
{
    private readonly DateTime expirationDate = new DateTime(2012, 1, 1);

    public bool IsExpired()
    {
        if (DateTime.Now > expirationDate)
        {
            return true;
        }
        return false;
    }
}

In this example, the method IsExpired has a hard dependency on when the test runs, since the property DateTime.Now returns the actual system time. The method has two cases, returning a different value based on that date. Changing the computer clock is out of the question - we want to test both scenarios on any computer, whenever we can, and without any side effects.

A possible solution to test both cases is to change the code. For example, we can modify our code to this:

public bool IsExpired(DateTime now)
{
    if (now > expirationDate)
    {
        return true;
    }
    return false;
}
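
For instance, a minimal MSTest sketch against the refactored method might look like this (the test class name, the scenario and the dates are illustrative):

// requires: using System; using Microsoft.VisualStudio.TestTools.UnitTesting;
[TestClass]
public class ExpirationCheckerTests
{
    [TestMethod]
    public void IsExpired_AfterExpirationDate_ReturnsTrue()
    {
        ExpirationChecker checker = new ExpirationChecker();

        // Pass in the "current" date instead of relying on the system clock
        bool result = checker.IsExpired(new DateTime(2020, 1, 1));

        Assert.IsTrue(result);
    }
}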

Here the test can inject a different, controllable DateTime value than the one in the production code. If we can’t change the code, we can use a mocking framework, like Typemock Isolator, that can mock static properties and methods. This allows writing the following test for the original code:

[TestMethod]
public void IsExpired_BeforeExpirationDate_ReturnFalse()
{
    Isolate.WhenCalled(() => DateTime.Now)
        .WillReturn(new DateTime(2000, 1, 1));

    ExpirationChecker checker = new ExpirationChecker();
    var result = checker.IsExpired();

    Assert.IsFalse(result);
}

Existing legacy code is not as simple to change, since we don’t have tests for it. When we start testing legacy code, the truth is revealed: the uglier our code, the harder it is to test. Tools can alleviate some of the pain, but we need to work hard for our safety net.

And it’s not just dependencies...

Another issue we quickly encounter is test maintenance: tests are coupled to the code they test. With coupling, there’s a chance that changing production code will break the tests, and when tests break due to code changes, we need to go back and fix them. The fear of maintaining two code bases discourages many developers from even starting unit testing. The real maintenance burden depends on both tooling and skill.

Writing good tests is a skill acquired through practice. The more tests we write, the better we become at it, while the tests improve and require less maintenance. With tests around, we’ll have the opportunity to refactor our code, which, in turn, will make for shorter, more readable and robust tests.

Tools can greatly affect how easy or hard the experience is. At the basic level we’ll need a test framework and a mocking framework. In the .NET space, there is a wide selection of both.

Guidelines for writing our first tests

When we start out, we usually experiment with different tools to understand how they work, rather than doing this on our real production code. But the moment soon arrives when we need to write actual tests for our code. When that time comes, here are a few tips:

  • Where to start: As a rule of thumb, we write tests for code we’re working on, whether it is a bug fix or a new feature. For bug fixes, write a test that checks for the fix. For features, check the correct behavior.

  • Scaffoldings: It is prudent to first add tests that make sure the current implementation is working according to our knowledge or expectation. We do this prior to adding new code, because we want a safety net around our existing code before we change it. These tests are called "characterization tests", a term from Michael Feathers’ excellent book, Working Effectively with Legacy Code.

  • Naming: The most important property of a test is its name. Usually once a test passes, we don’t look at it again. But when it fails, what we’ll see is the name. So pick a good one, describing the scenario and the expected result from the code. A good name will help us identify bugs in the test as well!

  • Reviewing: To increase our chances of successful adoption, we should partner with a co-worker when writing our first tests. Both will learn from the experience, and as with any code, we’ll get instant review on the test. It’s better to have agreement on what to test, and how to name it, since this will be the base template for the rest of the team.

  • AAA: Modern tests are structured in the AAA pattern - Arrange (the test setup), Act (calling the code under test) and Assert (the test pass criteria); the sketch after this list shows the structure. If we use Test Driven Development (TDD), we write the test first completely, and then add the code. For legacy code, we might need another option. Once we have a scenario and a name to test, write the Act and Assert parts first, and keep building the Arrange part as we learn which dependencies need to be prepared or faked, until we have a passing test.

  • Refactoring: Once we have tests in place we can refactor the code. Refactoring, as with testing, is an acquired skill. We’ll refactor not just the tested code, but also the tests themselves. We don’t apply the DRY (Don’t Repeat Yourself) principle to tests though. When tests break, we want to fix the problem as quickly as possible, and it’s better to have all the test code in one place, rather than scattered around in different files.

  • Readability: Tests should be readable, preferably by a human. Review the test code with a partner to see if they can make sense of the purpose of the test. Review other tests to see how well their names and content differentiate them from their neighbors. When tests fail, they will need fixing, and it is better to review them before that happens.

  • Organization: Once we have more tests, organization comes in handy. Tests can differ in many ways, but the most apparent is how quickly they run. Some run within milliseconds, others require seconds or minutes. As we work, we want the quickest feedback possible - this is how we keep the cadence I talked about earlier. To do this, we should separate the tests so we can run the quick ones apart from the slow ones. This can be done manually (and diligently), for example with test categories as in the sketch after this list; in .NET, Typemock Isolator also includes a runner that does the separation automatically.
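
To make the AAA structure and the fast/slow separation concrete, here is a small MSTest sketch, reusing the refactored IsExpired(DateTime) method from earlier; the test class name and the "Fast" category are illustrative:

// requires: using System; using Microsoft.VisualStudio.TestTools.UnitTesting;
[TestClass]
public class ExpirationCheckerAaaTests
{
    [TestMethod]
    [TestCategory("Fast")]   // a category lets us run the quick tests on their own
    public void IsExpired_BeforeExpirationDate_ReturnsFalse()
    {
        // Arrange: set up the object under test
        ExpirationChecker checker = new ExpirationChecker();

        // Act: call the code under test with a controllable date
        bool result = checker.IsExpired(new DateTime(2000, 1, 1));

        // Assert: state the pass criteria
        Assert.IsFalse(result);
    }
}

Most .NET test runners can filter on the category, so the fast suite can run on every change while the slower tests run less often.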

Summary

Taking the first steps in unit testing is challenging. The experience depends on so many things - language, tools, existing code, dependencies, and skill. With a little bit of thinking, a lot of discipline, and practice, you’ll get to unit testing nirvana. I did.

About the Author

Gil Zilberfeld is the Product Manager at Typemock. With over 15 years of experience in software development, Gil has worked with a range of aspects of software development, from coding to team management, and implementation of processes. Gil presents, blogs (www.gilzilberfeld.com) and talks about unit testing, and encourages developers from beginners to experienced, to implement unit testing as a core practice in their projects. He can be reached at gilz@typemock.com.
