Making it easy for users to give feedback, and automating its collection, helps you gather more feedback faster. Using artificial intelligence, you can analyze large amounts of feedback to extract insights and visualize trends. Sharing this information widely supports taking action to enhance your product and solve the issues users are having.
Kilian Hughes, director of research & insights at Joyn GmbH, spoke about how they collect and use feedback to build user-focused products at Agile Testing Days 2020.
To get regular feedback from users about your live product, you have to make life easy for them, Hughes explained:
One of our guiding principles is to make it extremely easy for users to give feedback.
For example, the users of our video-streaming app only have to shake the app: an overlay appears and they can type in their feedback right away. Screenshots and log files are collected automatically, so users don't have to worry about that.
User feedback is automatically uploaded into caplena, an AI-powered tool. The textual answers are coded for semantic analysis and then analyzed to extract insights. Hughes mentioned that using an AI tool has been beneficial for them:
Using a tool for semantic text analysis helps us to deal with large numbers of user feedback, increase the quality of our work, and decrease the time we spend on this task.
To make insights actionable, they are shared on TV dashboards, in weekly insights newsletters, and by creating posters and putting them on the office walls.
Hughes mentioned that they have learned a lot about their product by listening to users. It has helped them find out what users love, prioritize future features, and get a good understanding of bugs.
InfoQ interviewed Kilian Hughes about collecting feedback, analyzing large amounts of data, and making insights actionable.
InfoQ: How is user feedback collected?
Kilian Hughes: We get the feedback from the App Store/Play Store, our Joyn website, and our mobile apps. For the website and app, we use dedicated feedback tools (Usersnap & Instabug) to collect the feedback. The data is then automatically uploaded into the database for further semantic analysis.
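An ingestion pipeline like the one Hughes describes could be sketched as follows. This is a minimal illustration, assuming a normalized record shape; the field names, `normalize`/`store` helpers, and SQLite schema are all hypothetical, not Usersnap's or Instabug's actual payload formats:

```python
import sqlite3

def normalize(source, raw):
    """Map a raw feedback payload into a common record.

    The keys used here ("comment", "screenshot_url") are illustrative
    placeholders, not the real Usersnap/Instabug field names.
    """
    return {
        "source": source,  # e.g. "appstore", "website", "mobile_app"
        "text": raw.get("comment", "").strip(),
        "has_screenshot": bool(raw.get("screenshot_url")),
    }

def store(conn, records):
    """Append normalized feedback records to a simple feedback table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS feedback "
        "(source TEXT, text TEXT, has_screenshot INTEGER)"
    )
    conn.executemany(
        "INSERT INTO feedback VALUES (:source, :text, :has_screenshot)",
        records,
    )
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    raw_items = [
        ("mobile_app", {"comment": "Chromecast support please!",
                        "screenshot_url": "https://example.com/shot1.png"}),
        ("website", {"comment": "Live stream judders on my TV."}),
    ]
    store(conn, [normalize(src, raw) for src, raw in raw_items])
    print(conn.execute("SELECT COUNT(*) FROM feedback").fetchone()[0])  # 2
```

From a table like this, the records can flow on to a coding tool such as caplena for semantic analysis.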
We manage to get around 600-800 pieces of user feedback per week, sometimes up to 1,000, which is really a lot. Roughly 10% of the feedback is not useful for us, as it's incomprehensible or the users comment on specific elements of the content they are watching. Apart from that, we get enough specific feedback to come up with meaningful insights, identify trends, and compare the data quantitatively.
InfoQ: What techniques and tools do you use to analyze feedback? Why these?
Hughes: For analyzing the feedback, we use an AI-powered tool called caplena. The user feedback is automatically uploaded into caplena from the app stores and our website/app.
As with the typical process of coding textual answers, we had to define the underlying codebook as a first step. This means we come up with categories like "feature" or "tech" and then define sub-categories like "multi-audio" or "chromecast support".
Once this was done, we started training the AI to do the correct coding. This is an automatic process within caplena. The AI applies codes to the dataset and we then revise it: we check if the assigned codes are correct and change them if needed.
After we have checked a few codes, the AI uses our feedback to re-code the answers we have not yet reviewed. We then go over the next answers and check the quality again, and the AI screens the codes it applied and changes them if necessary.
Nowadays, our work mainly consists of checking the quality of the coding, assigning other codes if necessary, and refining the codebook if new sub-categories emerge.
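The codebook-then-review loop described above can be sketched as a simple human-in-the-loop process. The keyword matcher below is a stand-in for caplena's actual AI, whose internals are not public; the codebook mirrors the category examples from the interview ("feature"/"tech" with sub-codes like "chromecast support"), and everything else is illustrative:

```python
# Codebook: (category, sub-category) -> trigger keywords.
# Categories follow the interview's examples; keywords are assumptions.
CODEBOOK = {
    ("feature", "chromecast support"): ["chromecast", "cast to tv"],
    ("feature", "multi-audio"): ["audio track", "original language"],
    ("tech", "playback"): ["judder", "stutter", "buffering"],
}

def auto_code(text):
    """Propose codebook codes whose keywords appear in the feedback text.

    Stand-in for the AI coding step; a real tool would use a trained
    text classifier rather than keyword matching.
    """
    lowered = text.lower()
    return [code for code, keywords in CODEBOOK.items()
            if any(kw in lowered for kw in keywords)]

def review(proposed, correction=None):
    """Human review step: keep the proposed codes unless a reviewer
    supplies a correction, which would then feed back into training."""
    return correction if correction is not None else proposed

if __name__ == "__main__":
    feedback = "The live stream judders when I switch audio tracks"
    proposed = auto_code(feedback)
    final = review(proposed)  # reviewer accepts the proposal
    print(final)
```

In the real workflow, the accepted and corrected codes become training signal, so over time the human effort shifts from coding to spot-checking quality and refining the codebook.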
InfoQ: In what ways do you use the feedback to make insights actionable?
Hughes: For us, it is extremely important that our work does not end with delivering insights, but we want to see our insights leading to changes in the product. Therefore, it is crucial to raise awareness of the insights we identify. We do this in multiple ways: sharing our insights on TV dashboards throughout the company, sending out weekly insights newsletters, or creating posters and putting them on the office walls.
Of course, it is important to work closely with product and tech departments and share the relevant insights directly with the stakeholders, so they can understand the issues and prioritize fixes within their sprints. One example: we got user feedback that our live TV stream judders, but could not reproduce the issue. We then looked deeper into it, examined the screenshots we also collect via the feedback tools, and found out that it happens only on certain devices when users open the EPG (electronic program guide). We were then able to fix this bug, and the users are happy now.
InfoQ: What have you learned?
Hughes: By listening to our users, we learn a lot about them and our product. We identify what users love about our product, how to prioritize future features, and of course, get a good understanding of bugs.
Regarding features, the first finding after launching Joyn was that users really wanted Chromecast support. That was therefore the first feature we developed for our users after launch, which was highly appreciated by our customers.
We also learn a lot about the way we collect and analyze the feedback. For example, we started out adding all feedback manually to a spreadsheet and then uploading this into caplena to do the coding. We then optimized this process so that the feedback flowed into caplena automatically. I strongly believe in starting quickly with a simple version and then optimizing it, rather than building a full-fledged solution from the start.
Furthermore, we are currently experimenting with shorter Slack messages instead of sending out the weekly feedback email; let's see what user feedback we get from our colleagues.