
Using Artificial Intelligence for Analysis of Automated Testing Results

Analysis of automated testing results is a very important and challenging part of testing activities. At any given moment we should be able to tell the state of our product according to the results of automated tests, Maroš Kutschy said at QA Challenge Accepted. He presented how artificial intelligence helps them save time spent on analysis, reduce human errors, and focus on new failures.

Kutschy mentioned that they faced challenges with the analysis of automated test results, and were looking for a way to make the analysis more effective and less prone to human errors:

If you have around 4000 test scenarios running each night and if around 5% of them are failing, you need to analyse around 200 failures each day.

They introduced ReportPortal, a tool that uses artificial intelligence to analyze automated test results. It can be installed for free as an on-premises solution, as Kutschy explained:

I am the administrator; I did the proof of concept and the integration, and I solved any issues. Colleague testers who work in feature teams are using it on a daily basis.

Testers log in to ReportPortal, find the results of the job for which they are responsible, and see how many failures are in "To Investigate" status, Kutschy said. Failures that already existed the previous day, and were analyzed then, are categorized by ReportPortal automatically. For failures in the "To Investigate" status, they need to do a standard round of analysis, which means debugging the tests and finding the root cause of the failure:

ReportPortal shows the output of the analysis; you can see how many scenarios failed because of product bugs, automation bugs, or environment issues, and how many failures are still in "To Investigate" status.

When you start using the tool, it knows nothing about the failures, Kutschy said. Testers need to decide if each failure is a product bug, an automation bug, or an environmental issue. The next time the same failure arrives in the system, the correct status is assigned to it automatically, based on those previous decisions, using artificial intelligence.
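
Conceptually, this learning loop amounts to matching a new failure against failures that testers have already triaged and reusing the earlier decision when the match is close enough. The Python sketch below only illustrates that idea; it is not ReportPortal's actual implementation, and the failure messages, threshold, and similarity measure are assumptions made for the example.

```python
from difflib import SequenceMatcher

# Previously triaged failures: error message -> defect type chosen by a tester.
# The entries are hypothetical examples, not real ReportPortal data.
known_failures = {
    "NoSuchElementException: #checkout-button not found": "automation bug",
    "AssertionError: expected total 100.00 but was 99.00": "product bug",
    "ConnectionError: test environment db-host unreachable": "environment issue",
}

def suggest_defect_type(error_message, threshold=0.8):
    """Suggest a defect type by finding the most similar already-triaged failure."""
    best_label, best_score = "to investigate", 0.0
    for known_message, label in known_failures.items():
        score = SequenceMatcher(None, error_message, known_message).ratio()
        if score > best_score:
            best_label, best_score = label, score
    # Below the threshold the failure stays in "To Investigate" for a human to look at.
    return best_label if best_score >= threshold else "to investigate"

print(suggest_defect_type("NoSuchElementException: #checkout-button not found"))
# -> automation bug
print(suggest_defect_type("TimeoutError: page did not load within 30s"))
# -> to investigate
```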

Kutschy mentioned that the dashboards representing the results of the analysis provide a high-level view of the testing and the state of the application. The state of the analysis is visible in real time: you can see who is working on which failure. This helps to decide whether it is possible to release the application.
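
The dashboard figures described above are essentially counts of triaged failures per defect category. A minimal sketch of that aggregation, using hypothetical result data rather than ReportPortal's real data model:

```python
from collections import Counter

# Hypothetical nightly failures, each carrying the status assigned during triage.
failures = [
    {"test": "checkout_total", "status": "product bug"},
    {"test": "login_locator", "status": "automation bug"},
    {"test": "search_db_down", "status": "environment issue"},
    {"test": "new_payment_flow", "status": "to investigate"},
]

# Count failures per category, as a dashboard widget would display them.
summary = Counter(failure["status"] for failure in failures)
for status, count in sorted(summary.items()):
    print(f"{status}: {count}")
```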

With the tool they save time spent on analysis, as they look only at new failures, not at all of the failures, as Kutschy explained:

The difference is that if you have 100 failures today and only two of them are new, you only need to look at those two failures. If you are not using the tool, you need to look at all 100 failures.

There are also fewer human errors, as the tool triages the previously-seen failures for you, based on the decisions you made earlier. This helps to focus attention on new failures, Kutschy said.

Artificial intelligence will make the wrong decisions if humans train it with incorrect data, Kutschy said. If you are a bad teacher, your student (ReportPortal) will perform badly:

There were situations where one of the colleagues linked a failure to an incorrect Jira ticket or assigned an incorrect status to the failure.

You can "unlearn" by changing the decision manually, Kutschy mentioned.

If you use artificial intelligence correctly, it can save you a lot of time and reduce human mistakes, Kutschy said. Once you verify that it is working correctly, you can rely on it instead of having you and your colleagues triage the failures.

InfoQ interviewed Maroš Kutschy about using artificial intelligence for the analysis of automated testing results.

InfoQ: What challenges did you face along the way and how did you deal with them?

Maroš Kutschy: We started with a proof of concept, which confirmed that we could integrate the tool into our test automation framework.

The challenge then was to get colleagues to follow the new process of analysing test results using ReportPortal. Initially, they needed to categorize all existing failures, which meant assigning them the correct status (automation issue, product bug, environmental issue) and Jira ticket.

We ran a trial period with selected teams, and then all teams started using it. The feedback from the trial period was positive and the testers felt good about it, as it was helping them with their investigations.

InfoQ: What have you learned?

Kutschy: You have to verify that you can trust artificial intelligence before you start relying on it.

We had to be sure that ReportPortal was making the correct decisions. The decisions depended on how we handled stack traces in our test automation framework and on the settings of ReportPortal. When it did not work as expected, we adjusted the ReportPortal settings.

Most discussions are about using artificial intelligence to create test automation code, but we learned that the analysis of automated testing results is also a very suitable area. We can use artificial intelligence (including generative artificial intelligence) for many use cases in testing.
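
Kutschy notes above that the quality of the matching depended on how stack traces are handled in the test automation framework. One generic way to make the same failure look identical across runs, so that previous decisions can be matched reliably, is to strip run-specific noise before reporting. The sketch below illustrates that idea under our own assumptions; the article does not describe how Kutschy's team actually handles stack traces beyond noting that it mattered.

```python
import re

def normalize_stack_trace(trace: str) -> str:
    """Remove run-specific noise so identical failures produce identical text."""
    # Timestamps such as "2024-05-02 01:13:07,412"
    trace = re.sub(r"\b\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}\S*", "<timestamp>", trace)
    # Memory addresses such as "0x7f3a9c"
    trace = re.sub(r"0x[0-9a-fA-F]+", "<address>", trace)
    # Session identifiers such as "session_id=ab12cd" (a made-up pattern for this example)
    trace = re.sub(r"session[-_ ]?id[=: ]\S+", "session-id=<id>", trace, flags=re.IGNORECASE)
    return trace.strip()

raw = "2024-05-02 01:13:07,412 ERROR NoSuchElementException at 0x7f3a9c session_id=ab12cd"
print(normalize_stack_trace(raw))
# -> <timestamp> ERROR NoSuchElementException at <address> session-id=<id>
```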
