Following are the most relevant excerpts from an interview with James Bach at Oredev 2008. He covers topics such as engineering, why we should be telling success stories, opening our minds to other scientific domains, automated testing, and exploratory testing.
James compares the current Agile movement with the Renaissance one:
After a period that seems to me one of stasis in the 70s and the 80s, things began to break up in the 90s, and this new concept, which we now call Agile, this cyclic view of development, began to take hold in practice and is now quite popular. There are still the so-called traditionalists who continue to push in the other direction, who continue to think that the Agilists are just hackers and crazy people out to destroy civilization, just as the traditionalists said about the original Renaissance thinkers: “You are out to destroy civilization! The entire social order will collapse! The great chain of being will become unraveled and the world will be in flames” – which is pretty much what happened with the historical Renaissance. Of course, I think that makes sense, because when you change to new ways of thinking and you try new things out, there is a period of chaos – that's what learning is about! Renaissance thinkers are the people who push us into chaos after periods of stasis, and through that period of chaos we build new and more productive things.
Asked what he thinks about engineering, James replied that it is the art of making trade-off decisions while trying to solve certain problems – designing not the best solution, but a satisficing one:
Design is a satisficing process, which means that you don't have the ability to optimize to the best solution. What you do is crawl toward a good-enough solution along a variety of characteristics, surrounded by ambiguity and uncertainty, and you never know for sure if you have done an excellent job; you can just take it to the ground and try it, and other competitors may try to do it better. That's engineering!
In the same context, he does not think there are “best practices” in software engineering, because there is no model to establish what “best” means. There are just practices used while trying to solve problems. Furthermore, the process of solving problems in the software industry is not a deterministic one but a heuristic one:
In testing, in developing, we are surrounded by problems which do not have formulas to solve them. So we must gain the skills to use the tools properly and never let ourselves think that the tool is the solution. It's the tool plus the human – that is the solution. Otherwise you run into this problem where someone who is a good tester tells another person, who is not a good tester, “The solution is right in that test.” The tester then thinks, “OK, if I just follow what's written down, I'll be as good as you, because I've already done what you told me to do.” In our industry we need to move from focusing on artifacts and tools to focusing on the skills of using artifacts and tools, to talk about those things together – that's the thing that I've been trying to do for years and years: to get people to refocus on humans so that we can talk about heuristics, instead of fooling ourselves into thinking that we really serve the procedures.
James thinks that we, humans, like stories, and that we should be telling success and failure stories across organizations. Some argue that managers do not want to hear stories – they want numbers. James disagrees:
No, they don't! Management thinks in stories, and sometimes their stories involve numbers, but what I find management is concerned about is nightmare scenarios. “What if this terrible thing that happened to Intel happens to us? Or what if we ship with this bug? We never want to have that happen again. Let's make sure it never happens again!” That's thinking in stories, that's having examples – I'm talking about examples and using those examples as motivation. …
Celebrating success, celebrating failure is focusing on using stories to drive learning in organizations. A lot of people don't work on this.
James also considers that it is very important to open our minds to other scientific fields. There are solutions for many problems, solutions already discovered and used in mechanical, chemical or other domains, that we might learn from. He gave the example of randomly picking up and reading a knitting book:
So, I went to the knitting section of the books – I don't knit, I don't want to knit, I have no interest in arts and crafts whatsoever – going to the knitting section was a completely new experience for me. I picked out an advanced book on knitting and just started looking through it. The first thing I saw was this incredibly complex mathematical engineering diagram. I couldn't believe I found that in a knitting book! It looked like some sort of diagram of a molecule or a protein or something, and I looked more closely and it was a knitting pattern, and there was a procedure that went with it. The procedure is incomprehensible, it's full of abbreviations, and I realized that the people in the knitting community have learned something that the people in software testing need to learn: they've learned that it's OK to write down a procedure that is incomprehensible unless you are properly trained. We still have this idea in the testing world that every test procedure must be understandable by any 12-year-old who can read and write. What that causes us to do is write simple-minded test procedures that are hard to maintain. But when I tried to read this procedure for how to knit a particular handkerchief or whatever it was, I realized that I had to pick up the Knitting for Dummies book that was right next to it, and that I needed to learn something about knitting so I could learn to read the diagrams, so that I could take advantage of this very concise description of a sweater to be knit.
James loves testing and dislikes programming. He thinks one needs a different type of focus to do testing compared to programming:
A developer's job is to think about a way to make things work; they need to be focused. A tester's job is to see 1000 ways things can go wrong; we need to be defocused.
While many companies, including major ones, consider that understanding the technology is paramount for testing, James has a different opinion:
When I became a tester, I stopped studying technology as a primary thing, I started studying errors, exceptions, how could people be fooled, how could I be fooled and therefore, how can I overcome being fooled, how can I get out of being fooled. Because, as a tester I'm the only one on the project whose job it is not to be fooled. Everyone else can be confused and think something is true when it isn't, but the tester is supposed to know when things aren't really the way they seem. It's difficult, but that's their job. It's a very different way of thinking that requires a different kind of studying, a different kind of focus if you are going to be very good at it.
James does not believe in automated testing. He thinks it has not been done yet, and probably never will be:
As a programmer, of course, I'm constantly thinking about where to use these tools; I consider test automation to be important – and by test automation I mean tools supporting testing. However, I want to point out to the people in the audience here: no good manual test has ever been automated. Probably no good manual test ever will be automated, and yet people keep saying, “I automated that test. I was doing that test by hand and now I've automated it.” No, you haven't! Unless you only think like a machine. Humans are able to see many things and will know, as soon as they see them, “Oh, that's not supposed to happen!” With a program, you have to tell it that it's not supposed to happen. Humans have expansive oracles; machines have very specific oracles. And sometimes people say, “Well, I programmed my test automation so that if anything other than this one approved thing happens, it will say there is an error.” Now you have the opposite problem: now it's going to complain in a lot of situations where there isn't really a problem. Either way, you have a problem. Humans get to think while they are testing, humans get to wonder, humans have curiosity, humans can be distracted by something that seems odd even though they can't put into words why. A program must be specifically programmed. What I'm worried about is when tool vendors who don't understand testing at all sell tools to people who don't understand testing at all. They sell to people with big wallets and they say, “Look at this incredible demo of things that appear to be tested.” It's all very simplistic, focusing on certain kinds of testing rather than others – the kinds of tests that are easy to automate – while systematically ignoring tests that their tools can't easily automate.
Then management buys these $100,000 tools and tells the testers, “Now you have to use the tools.” Even if the testers don't want the tool and haven't even dealt with this whole question of what we do in testing, management says, “I just spent $100,000 on this; you're going to use it or else I'll look stupid!” The tester goes, “I know it is my job, so now I'm going to restrict testing only to what this stupid tool knows how to do.” When Agile programmers say, “Our goal is 100% automation,” I go, “Is your goal 100% ignorance too? Can't we learn enough about testing to know that automation doesn't do everything that we need to do to analyze products?” There are all kinds of wonderful things humans do, and automation can only deal with some subset of them. Why don't we instead say, “Let's develop tools that can help us test”? And let's use tools wherever they might help us test. What's wrong with just saying that? Saying that everything must be automated is an empty motivation. There is no reason for it. But I could see going above and beyond that, simply saying, “Maybe tools will help us. Let's see where they can, and meanwhile let's study testing; let's understand how to test, so that we have the imagination to imagine things that tools may have a really hard time doing but that we could do rather easily by hand.” In my classes, I demonstrate different kinds of testing that are hard to automate but very easy to do by hand.
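The oracle problem James describes can be made concrete with a minimal sketch (the function, the bug, and the names here are hypothetical illustrations, not from the interview): an oracle that checks only one approved property lets an odd output slip through unnoticed, while an exact-match oracle catches it but would equally reject harmless changes.

```python
def render_greeting(name):
    # Imagined system under test. Suppose a bug sneaks a stray BEL
    # control character onto the end -- something a human tester
    # would notice as "odd" without being told to look for it.
    return f"Hello, {name}!\u0007"

def specific_oracle(output):
    # Oracle #1: checks only the approved properties.
    # It passes here, silently ignoring the stray control character.
    return output.startswith("Hello,") and "Alice" in output

def strict_oracle(output):
    # Oracle #2: anything other than the one approved string fails.
    # It catches this bug, but would also fail on harmless changes
    # (extra whitespace, a reworded greeting) -- false alarms.
    return output == "Hello, Alice!"

out = render_greeting("Alice")
print(specific_oracle(out))  # True  -> the bug slips through
print(strict_oracle(out))    # False -> flagged, as would be benign changes
```

Neither oracle "automates" what a human does: one under-specifies, the other over-specifies, which is exactly the trade-off in the quote above.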
Talking about exploratory testing, James wants to correct the perception that exploratory testing is somewhat chaotic:
I get that people think exploratory testing is clothing-optional testing – like hippies in the moonlight, drumming and taking Ecstasy and doing anything they want, everything is possible. It [a certain book] says in there that exploratory testing is almost impossible to manage, almost impossible to observe, almost impossible to control – you never know what happens. I can only think that whoever wrote that never watched a competent exploratory tester do their work, because it's like saying, “Pair programming is impossible to manage, it's impossible to observe, you can't teach it to anyone! And management is impossible to manage, impossible to observe – no chief executive could possibly be analyzed, no one could ever share their method of managing people with anyone else. Driving – no one can control driving, and no one can say what they did when they were driving to work today, it's just impossible!” All of those are exploratory activities, and all of them are manageable, all of them are describable, all of them are studyable, all of them are transferable skills. Exploratory testing is the same thing! Exploratory testing does not mean undocumented testing, and it does not mean informal testing – those are the two big things people think it is. It can be undocumented and it might be informal, but it could also be highly documented and super formalized. All exploratory testing requires is that the tester does test design, test execution, and learning as three parallel activities, mutually supporting each other throughout the project. In other words, I don't do all my thinking on day 1 and then all my test execution on day 2, keeping them separate – they are all mixed together. A disciplined approach to exploratory testing means learning to do this while taking notes, while being able to explain what you are doing. I think what happens is that people take incompetent exploratory testing as their example of exploratory testing – which, of course, you could do with anything.
You could say somebody doesn't know how to write code and they mess it up, and then you could say, “Writing code is something that no one can learn and no one can teach. Look, that guy screwed up and I don't know what he did!” It's the same kind of thing with exploratory testing. I've worked hard to make it a teachable discipline, and that's what I do – I teach skilled, disciplined exploratory testing.
The interview concludes with James' opinion on testing certification. He is not against certification per se, but he is vehemently against self-proclaimed certification organizations that claim to know what testing is about without being recognized by the community. He also believes in skill-based certification: it is not enough to fill out some forms and pass an exam in one day; it takes practice and experience to qualify as a good tester.