In the blog post Shift left and shift right: the testing swing, Laurent Py described how moving from a waterfall process to agile, and later to DevOps, has changed the way testing is done at Hiptest. He gave a talk about this at Agile Testing Days 2015.
InfoQ interviewed Laurent Py about why Hiptest decided to transition to agile and DevOps and the benefits they are getting from it, the "testing swing", how you can measure behavior change to find out whether a feature is valuable, their strategy and approach for test automation, and what he expects the future will bring in testing.
InfoQ: In the blog post you explored how testing has changed while moving from waterfall to agile and now DevOps. Can you elaborate on why you decided to make this transition?
Py: This is all about speed of feedback and the lean startup approach. Ten years ago my team was working on a product developed in Java and Eclipse. We were doing two releases a year, and the problem with this process was speed of feedback. For months you develop and create an asset, but since it is not yet in the hands of users, the value of this asset is zero. From a business point of view it is hard to adapt and pivot quickly when you only have two opportunities a year.
So first we adopted agile and streamlined our process. From an engineering standpoint it was great, because we had a working product every two weeks. But since it could not be deployed into production instantly, we had the same problem with speed of feedback: users don't want to install a new release every two weeks.
Now the team is developing a new product in the cloud, a test management platform for agile teams (hiptest.net). Devs and ops work together and we do continuous deployment. So not only do we have a good engineering process, but we can finally get feedback from users and react in real time. To me those are the biggest benefits of agile and DevOps.
InfoQ: What are some of the benefits that you are getting from agile and DevOps?
Py: Developing a product is one thing; getting it into the hands of real users is another. The ability to get quick feedback really increases the team's involvement. As a team member, you see the impact of what you do, and you don't have to wait months for it. This is really tough and requires a lot of discipline from an engineering standpoint, but it is clearly worth the effort. There is only one outcome I believe in: users and customers. Everything in between does not mean success.
InfoQ: Can you explain what you mean by the "testing swing"?
Py: By the testing swing I mean shift left and shift right. Previously, testing was mostly done after development and prior to production. Part of the testing activity has now shifted left: tests are designed prior to development. This is the Behavior Driven Development (BDD) approach. It aligns the team on the definition of what we should build. All the stakeholders (testers, devs, product owner, marketing) collaborate on two things (see the sketch after this list):
- product acceptance criteria – the examples,
- business acceptance criteria – the assumptions we want to validate
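To make the shift-left side concrete, here is a minimal sketch of an example-based acceptance criterion, written in Python as a plain pytest-style test. The "archive project" feature, the class, and the names are hypothetical illustrations of the Given/When/Then style, not Hiptest's actual tests or API.

```python
# Hypothetical acceptance criterion for an imagined "archive project" feature,
# structured so the Given/When/Then steps read like the team's conversation.
# Names and behavior are illustrative, not taken from Hiptest's real test suite.

class Project:
    """Tiny in-memory stand-in for the domain object under discussion."""
    def __init__(self, name):
        self.name = name
        self.archived = False

    def archive(self):
        self.archived = True


def test_archived_project_is_hidden_from_the_active_list():
    # Given a project the team no longer works on
    project = Project("Legacy checkout flow")
    active_projects = [project]

    # When the owner archives it
    project.archive()
    active_projects = [p for p in active_projects if not p.archived]

    # Then it no longer appears in the active project list
    assert project not in active_projects

# The business acceptance criterion is the assumption to validate after release,
# e.g. "teams that archive stale projects stay more active"; that part is not
# asserted in code but measured in production (the shift right discussed next).
```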
Then there is the shift right: testing (A/B testing) and monitoring directly in production. Just as importantly, we also get real-time user feedback (via live chat) to raise issues that might not be detected beforehand. Sometimes wrong behavior is not caused by errors in the code but simply by bad UX, or it only appears once a certain mass of data is reached. Since we have quick feedback and the ability to deploy continuously, we can react quickly when there is a problem.
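On the shift-right side, one common way to run an A/B test in production is to assign users to variants deterministically from a hash of their id, so the assignment stays stable across sessions. The sketch below is a generic illustration of that technique, not Hiptest's actual rollout or monitoring code.

```python
import hashlib

def variant_for(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically assign a user to variant 'A' or 'B' for an experiment.

    Hashing user_id together with the experiment name keeps the assignment
    stable across sessions and independent between experiments. Illustrative only.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000  # map the hash to [0, 1)
    return "A" if bucket < split else "B"

# Example: route a user to the old or the new UI and record the exposure,
# so that the behavior of the two groups can be compared later.
print(variant_for("user-42", "new-refactoring-ui"))
```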
InfoQ: In the blog post you mentioned that you are measuring the change in behavior to find out if a new feature is valuable for the users. Can you give some examples showing how you have done this?
Py: We added test refactoring features to Hiptest. This is a key differentiator, and we could have simply measured how many users actually use it, but measuring impact is more important. We measured that people using this new feature increased their level of automation. They also increased their usage of Hiptest. So when developing a new feature and measuring its value, the question is not just "are people using it?"; it is about the impact on our business and on the users' workflow.
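As a rough illustration of measuring impact rather than raw usage, the sketch below compares the change in automation rate for users who adopted a new feature against those who did not. All field names and numbers are made up for the example; they are not Hiptest's actual metrics.

```python
# Made-up usage data: each record holds a user's automation rate before and
# after a feature was released, plus whether they adopted the new feature.
users = [
    {"id": 1, "adopted_feature": True,  "automation_before": 0.40, "automation_after": 0.65},
    {"id": 2, "adopted_feature": True,  "automation_before": 0.55, "automation_after": 0.70},
    {"id": 3, "adopted_feature": False, "automation_before": 0.45, "automation_after": 0.47},
    {"id": 4, "adopted_feature": False, "automation_before": 0.60, "automation_after": 0.58},
]

def average_lift(group):
    """Average change in automation rate for a group of users."""
    deltas = [u["automation_after"] - u["automation_before"] for u in group]
    return sum(deltas) / len(deltas)

adopters = [u for u in users if u["adopted_feature"]]
non_adopters = [u for u in users if not u["adopted_feature"]]

print(f"Automation lift, adopters:     {average_lift(adopters):+.2f}")
print(f"Automation lift, non-adopters: {average_lift(non_adopters):+.2f}")
```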
InfoQ: What is your strategy and approach for test automation?
Py: Every test should tell a story about the application. When a test is defined using consistent business terminology (the BDD approach), it is both readable and easier to automate. This is the philosophy we support with Hiptest. By the way, we test Hiptest with Hiptest and automate 100% of our test scripts. Automation is done partly at the GUI level and partly directly against the API. Of course this is integrated with the CI process. Once again, speed of feedback is key for developers: when you commit your code, you need to know quickly if you have broken something.
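As an example of automation at the API level, an HTTP-level check like the sketch below can run in CI on every commit. The endpoint, payload, and response fields are invented for illustration; this is not Hiptest's real API.

```python
import requests  # third-party HTTP client, assumed available in the CI image

BASE_URL = "https://staging.example.com/api"  # hypothetical staging environment

def test_create_scenario_via_api():
    # When a scenario is created through the (hypothetical) API
    response = requests.post(
        f"{BASE_URL}/scenarios",
        json={"name": "Archive a stale project"},
        timeout=10,
    )

    # Then the API confirms creation and echoes the scenario name back
    assert response.status_code == 201
    assert response.json()["name"] == "Archive a stale project"
```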
InfoQ: What do you expect the future will bring us in testing?
Py: I hope testing will become more business oriented. Some people are afraid that QA will disappear; I think it's the opposite … if you can adapt. What's the point of investing a lot of effort in the correctness and performance of a feature if it is not used or doesn't bring significant value to the product? This is where the critical thinking of testers can help. Why do we build this? What are the assumptions, and should we build it? If the answer is yes, then correctness makes sense.