Most testing work requires multiple roles: the subject matter expert, the tool-smith, the analyst, the leader, and so on. James Bach, perhaps the best-known tester in North America, once identified seven types of software testers - and those types only describe activities; they do not address the type of project or technology! At the beginning of my career we would have a 'test team' that could cover all of these disciplines. Today that team is unlikely to exist; the company is more likely to have a single tester embedded on a team, who needs all of the skills, not just a specialty. Trying to hire for all the skills leads to a search for the 'unicorn' - the perfect candidate who can do five things at once and who also fits the HR requirements for salary. These kinds of searches slow down the team, reduce quality while the search drags on (a match is rare), and can waste the time and attention of the executive team.
For example, imagine meeting with a group to implement a high-level test strategy, then coming back in six months and finding little progress because the director-level executives have been busy trying to staff projects. I wish I could tell you this was uncommon, but it is far too common.
Last time we talked about how to structure a large test organization, especially the tension between central control and self-organizing teams. Today we'll get tactical and talk about how the delivery team can acquire all the skills it needs to deliver software when the number of testers is low and specialization is a reality.
Let's take a look.
Maximize Your Current Efforts
The best place to start is usually taking a look at what you are doing right and seeing where you can inch forward a little bit more. Collaboration and team training are two popular places.
Train Up
One of the outcomes of putting people together on a small team is that you become aware of the strengths and weaknesses of your teammates. One programmer might not be very good at writing SQL queries, and the lone tester might be lacking in technical skill.
“Lunch and learn” sessions are a popular way to start addressing those weaknesses. One person is selected to speak on a topic over an hour at lunch, and everyone else on the team gets to listen, ask a few questions, and get a free lunch. At a minimum, people on the team are exposed to new ideas and know a little more than they did before lunch. Programmers can get deeper knowledge of testing topics, such as understanding why a bug is a bug and what good reporting habits look like, while the test staff gets introduced to new technical skills.

Lean Coffee is what you get if you take a lunch and learn session and flip it on its side. These are facilitated problem-solving sessions driven by what the attendees want to learn, rather than what a speaker wants to talk about. At the end of a Lean Coffee meeting, you might have a solution to the problem that has been slowing you down for the last day.
Create Cross Functional Teams
Consider a development group made up of three or four programmers and a single non-technical tester. If the programmers wait until the end of a sprint to show their work to the tester, flow looks flat until the very last minute, and surprises are found later than you would hope. Developer/tester pairing can smooth that flow out and potentially find problems earlier. While the developer is writing code, her tester friend can be asking questions -- what happens if the user forgets to populate this field, what happens if there is a decimal in that field, how will the user know when a save is complete, will this work in Internet Explorer 9 -- or perhaps building outlines for what will become an automated check for that new piece of code.
The end result is that the developer gets more feedback on how things might go wrong, and the tester, having seen how the feature was written, is better equipped to ask questions about how it might fail. Over time, your programmers will get better at handling common failure patterns that testers traditionally find, like buffer overflows or special characters, and the tester becomes familiar enough with the programming language to develop automated checks alongside the production code.
The Consulting Coaching Tester
In some companies, having a tester to share between several teams is a luxury. Programmers write the production code, add some amount of automated checking to verify they wrote what they think they did, and then a mishmash of tooling and build systems handles merging code and running the automation. Yahoo is a recent example of a company going with the 'mostly developers' model. You could look at this and say that testing still happens all around the running of the automation, and that they essentially just got rid of unskilled testing, and you'd be right.
Still, on many teams programmers have a varied understanding of testing. The consulting/coaching tester has the skill to perform testing across many different groups, as well as the ability to raise the average skill level of testing among the programming staff.
This type of tester is on a travel team that moves between development teams as requested. There are a few ways this coach can add value. Some requests might be to pair with a developer on a new feature, giving guidance on what should be checked programmatically and where a person should be interacting with the product. Other requests might be less about doing the actual testing work and more about teaching: raising the average skill level through sessions like those you might see at a conference, or through games (like the infamous dice game) that parallel and illustrate the act of testing software -- experimenting, designing, taking notes, questioning, and understanding your expectations.
Each interaction between programmers and the testing coach should move the needle on testing skill up just a little more.
Improving Testability
Software testing is challenging work. Testers are expected to hunt down many different information sources and figure out which are credible, massage the product in just the right way so that problems can be found and fixed before customers find them, and then, on many teams (but not all), lobby developers and the people in charge of the schedule to get those issues fixed. All of this takes time and effort, even more so when the product is difficult to get information out of.
Testability in features makes it easier for your testers to get quick information.
Here is a common scenario. You are testing a new feature to enter demographic information for a hospital patient. After entering some simple information to get a feel for the feature, a task that should ideally be quick, you click the submit button and return to the patient list. Back at the patient list, things look wrong, very wrong. Before you added the demographic information there were at least 20 rows of data; now nothing is displaying. Now you have to spend time investigating, trying to figure out what you did to cause this nasty behavior.
There are a few things that would make this process faster. At the top of the list is good logging that includes event timestamps, the origin URL, the data sent, user information, and sometimes even an IP address. With this, you can trace back to the moment you clicked save and look at the exact data you sent to find the offender.
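As a rough sketch of what such logging could look like, here is one structured log line per request, built with Python's standard library. The field names and the demographics endpoint are illustrative, not from any particular product:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("demographics")

def log_request(url, user, payload, ip=None):
    """Emit one structured log line per request, so a tester can trace
    exactly what was sent, by whom, and when."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "origin_url": url,
        "user": user,
        "data": payload,
        "ip": ip,
    }
    log.info(json.dumps(record))
    return record

# The save that preceded the suddenly empty patient list:
entry = log_request(
    url="/patients/42/demographics",
    user="jdoe",
    payload={"dob": "1990-02-30"},  # an invalid date this log line would expose
    ip="10.0.0.5",
)
```

With a line like that in the log, "what did I send right before the list went blank?" becomes a search rather than an investigation.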
Of course, not all testing involves a person working through the user interface. In efforts to automate a user interface, toolsmiths sometimes use XPath or a field label to find elements on a page; even worse is using pixel coordinates. All of those become a time sink that requires a person to constantly update the script whenever anything about that page element changes. Creating an ID for any object a person might touch, even if you aren't planning to immediately write a check for that page, will save your future self many hours.
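To make the contrast concrete, here is a minimal sketch using only Python's standard-library HTML parser. The page snippet and the `save-demographics` ID are invented for illustration; the point is that an ID lookup survives layout changes that would break a positional XPath like `//form/button[1]`:

```python
from html.parser import HTMLParser

# A tiny stand-in for a real page; the button has a stable ID.
PAGE = """
<form>
  <input name="first"><input name="last">
  <button id="save-demographics">Save</button>
</form>
"""

class IdFinder(HTMLParser):
    """Collect the IDs present on a page. A check that locates the button
    by ID keeps working if the form is redesigned; one that counts its way
    down the DOM, or matches on pixel coordinates, does not."""
    def __init__(self):
        super().__init__()
        self.ids = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "id":
                self.ids.add(value)

finder = IdFinder()
finder.feed(PAGE)
```

In a real UI-automation tool the same idea shows up as locating elements by ID rather than by structure or position.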
If you have a component architecture with an accessible API, you are in a good place. Using that API, you can test much of a program before the user interface exists and also create tooling to add data to the product that would take considerable time to do by hand.
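As a sketch of testing below the UI, imagine the patient feature exposed as a small in-process API. `PatientService` and its rules here are hypothetical stand-ins, but they show both benefits at once: exercising validation before any screen exists, and seeding data that would be tedious to enter by hand:

```python
# A toy patient-service API standing in for a product's component API.
class PatientService:
    def __init__(self):
        self._patients = {}

    def add_patient(self, patient_id, demographics):
        """Business rule: a patient record must have a name."""
        if not demographics.get("name"):
            raise ValueError("name is required")
        self._patients[patient_id] = demographics

    def list_patients(self):
        return list(self._patients)

svc = PatientService()

# Seed twenty rows of data in an instant -- no clicking through forms.
for i in range(20):
    svc.add_patient(i, {"name": f"patient-{i}"})

# Exercise the validation rule directly, before any UI exists.
try:
    svc.add_patient(99, {"name": ""})
    rejected = False
except ValueError:
    rejected = True
```

The same technique works over HTTP against a real component API; the shape of the test stays the same.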
Saving time is the name of the game for testability. Your developers will get information on problems faster, and your testers will spend more time doing what is important: testing software.
Testing as a Whole Team Activity
The ideal agile project team is cross-functional. Inside of one small group of people are all the skills needed to take a feature from paper to production. These teams also have an ideal of shared responsibility for the quality of the feature they are making. Testing is an activity performed by the whole team.
While writing the production code, programmers will use a combination of design tools like Test Driven Development (TDD) and Behavior Driven Development (BDD) along with writing unit tests to see that they are moving in the right direction. Now, there is an argument that these things are checks, answers to simple 'yes' or 'no' questions, and that would be true. But, the stuff that happens before these are run, and after when they turn up red, looks a lot like testing.
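Here is red/green in miniature: the checks below were written first (red), then the function was implemented until they passed (green). `normalize_phone` is a hypothetical feature invented for the example, not from any product:

```python
def normalize_phone(raw):
    """Strip punctuation from a 10-digit US phone number."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) != 10:
        raise ValueError(f"expected 10 digits, got {len(digits)}")
    return digits

def check_strips_punctuation():
    # This check existed, and failed, before normalize_phone did.
    assert normalize_phone("(555) 123-4567") == "5551234567"

def check_rejects_short_numbers():
    # Writing the check first forced a decision: what *should* happen
    # with bad input? That decision-making is the part that looks like testing.
    try:
        normalize_phone("123")
    except ValueError:
        return
    raise AssertionError("short number was accepted")

check_strips_punctuation()
check_rejects_short_numbers()
```

Running the checks is a yes/no affair; deciding what to check, and digging in when one turns red, is where the testing happens.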
By the time the tester has a full-fledged piece of software to look at, code quality should be fairly high, and most of the basic problems will hopefully have been found while those checks were being created. This leaves the tester a more challenging and meaningful job.
Whole team testing is also about involvement from outside the technical staff. Product managers assess whether or not the customers will find the new feature valuable, sales staff will be concerned about how well the product demos, and support people will need to know that the product can be learned quickly and supported well.
Your dedicated testers will be the experts, finding issues that others probably will not, but having quality and testing as part of everyone's role will create a better product in the end.
Leveraging the SME
The subject matter expert (SME) tester often comes from a non-technical education background like English, history, or the arts, and comes into testing through support roles or product management. These people fill a special place: they have a deep understanding of the software product, the business domain that product operates in, and the users and what they value.
Imagine what happens at the beginning of a sprint. Ideally, programmers look at a prioritized list of new work and start from the top. On first read, programmers spend some time with a typically overworked product manager who may or may not have had time to dive deeply into what the new feature will be. This is where the subject matter expert shines. Being an expert on the business domain puts them in the perfect place to clarify not only the value, but how the customers will want to work -- do they do some work and come back later to complete it, are they in a high-stress environment that requires a very simple user interface, or are they working in a data-intense area that needs simplification? Knowing how people use your product and what their work is like can mean developing a completely different feature than was first imagined.
Similarly, during actual software development testing, the SME focuses on value. Even with discussions and a set of user stories or acceptance tests, some things can still get lost in translation. For example, think of a product with a data grid that gets populated intermittently by someone who is constantly referencing data in other parts of your product. Your SME will have that behavior in mind and realize that not automatically saving each value entered could mean data loss if the user navigates away without remembering to save every time.
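The SME's concern can be shown with two toy grid models, both invented for this sketch: one that saves only on an explicit submit, and one that persists each cell as it is entered, which is the behavior the SME would push for:

```python
class SaveOnSubmitGrid:
    """Values live in a pending buffer until the user remembers to save."""
    def __init__(self):
        self._pending = {}
        self.saved = {}

    def enter(self, cell, value):
        self._pending[cell] = value

    def save(self):
        self.saved.update(self._pending)
        self._pending.clear()

    def navigate_away(self):
        self._pending.clear()  # unsaved work is silently lost

class AutoSaveGrid:
    """Each value is persisted the moment it is entered."""
    def __init__(self):
        self.saved = {}

    def enter(self, cell, value):
        self.saved[cell] = value  # persisted immediately

    def navigate_away(self):
        pass  # nothing to lose

manual, auto = SaveOnSubmitGrid(), AutoSaveGrid()
for grid in (manual, auto):
    grid.enter("A1", "120/80")
    grid.navigate_away()  # the user hops off to look something up
```

After the user navigates away, the submit-only grid has lost the entry while the autosave grid kept it -- exactly the workflow gap the SME would catch before a customer does.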
The subject matter expert might not be able to participate in code reviews, write scripts to automate some testing work, or help developers find missing error handling, but they certainly add value to the team.
Summary
Since the introduction of agile, there are fewer testing roles, and more is expected of the people who fill them. Combine that with the rising demand for technical skills, and it can be difficult to see who fits on your team and what value they add. With some careful thought about how a person's skills fit into a team, and how that team works together, you can end up with a stronger group of developers.
About the Author
Sanjay Zalavadia is the VP of Client Service for Zephyr. Sanjay brings over 15 years of leadership experience in IT and Technical Support Services. Throughout his career, Sanjay has successfully established and grown premier IT and Support Services teams across multiple geographies for both large and small companies. Most recently, he was Associate Vice President at Patni Computers (NYSE: PTI), responsible for the Telecoms IT Managed Services Practice, where he established IT Operations teams supporting Virgin Mobile, ESPN Mobile, Disney Mobile and Carphone Warehouse. Prior to this, Sanjay was responsible for Global Technical Support at Bay Networks, a leading routing and switching vendor, which was acquired by Nortel. Sanjay has also held management positions in Support Service organizations at start-up Silicon Valley Networks, a vendor of Test Management software, and SynOptics.