
Democratizing AI at Thomson Reuters: Empowering Teams and Driving Innovation

In this podcast Shane Hastie, Lead Editor for Culture & Methods, spoke to Maria Apazoglou, Head of AI, BI & Data Platforms at Thomson Reuters, about her experience in building great teams and democratizing the use of large language models across the organization.

Key Takeaways

  • Great teams are composed of a diversity of individuals with a strong sense of ownership over their work, supported by the right processes and structures.
  • To democratize AI within an organization you need to provide tools, training, and community support to enable wide adoption of large language models.
  • Developers can use LLMs like GitHub Copilot to boost productivity, improve testing, and more easily debug and fix issues.
  • LLMs are tools to make work easier, not to replace humans, as people will always find more complex work to focus on.

Transcript

Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. Today, I'm sitting down with Maria Apazoglou. Maria, did I get that right?

Maria Apazoglou: Yes, that was great.

Shane Hastie: Maria is in Switzerland. I'm in New Zealand, so we are at the opposite ends of the day. My normal starting point, who is Maria?

Introductions [00:53]

Maria Apazoglou: That's very interesting. So I am Greek, and I think I'm an engineer and a mathematician at heart. So if you ask "Who is Maria?", someone will probably say, "Maria is a mathematician. And Maria approaches everything with logic". But in terms of what I do, I work at Thomson Reuters, I'm part of the product engineering team within Thomson Reuters, and I am responsible for our AI platform, our BI platform, our content platform, our data platform, and our CoCounsel product. So a variety of different applications that support us to use data, use content, and obviously leverage AI, either internally or with our customers. I didn't always work in product engineering, though. Interestingly enough, I started out as a quantitative analyst and developer. That was the beginning of my career.

So I started with C++, writing models for traders, and then I moved more and more into, I would say, managing teams and using platforms and creating products with these platforms, moving into Palantir and then moving into HSBC and getting roles that were basically more and more responsibility on creating applications for internal consumers.

And yes, it's been an exciting journey. Definitely the last few years have been extremely interesting with generative AI and what we can do across the ecosystem. So very happy to be at the place that I am at the moment.

Shane Hastie: Let's dig in first to some of the people in the teamwork. In your experience, what makes great teams?

What makes great teams? [02:35]

Maria Apazoglou: I would say it's, for me, one of the things that I'm always thinking about when I'm hiring, and at this moment in time, I think I have roughly 500 people in my team, directly or indirectly in some capacity. So it's really important to have a lot of great teams. I think what makes a great team for me is a mix of individuals, a diversity of individuals. I tend to say sometimes that I love a person that's very hacky, and then I love a person that is very focused on stability, because they counterbalance each other. Sometimes you need to go really fast, but then sometimes you need to have someone that focuses on how to make things better. And then it's really around, for me, the sense of ownership.

I love teams that each person within the team has a high sense of ownership of the work that they're doing. And I think for me, that sense of ownership drives great quality in the way that they will write their code, in the way that they'll create the application, in the way that they will care about the users of their code.

So it's probably one of the fundamental things that I look at for anyone that works with me. And then obviously there are other things, which are the practices of what makes a great team. So you can hire amazing individuals, you can hire individuals that are super smart and have a great sense of ownership, but if you don't equip them with the right structure, I think they're not empowered. And that's where, for me, there is the question of the optimum size of the team, which is typically around seven to a maximum of 10 people who will drive together towards a very defined goal and a defined deliverable that is cut into chunks. And then they obviously use a lot of agile practices within their ways of working: planning, sprint planning, making sure everything is tracked in ADO boards.

There are others externally that I know are very popular. But basically it's about making sure that the work is structured and everything is at the team's disposal, from a process perspective and a support perspective, to be effective. But definitely, I would say, for me it starts with the individual and a great sense of ownership.

Shane Hastie: Now, when we were talking earlier, you made the point that you're working with teams dealing with products that are very legacy, built in the 1990s, and others that are literally up-to-the-moment large language model implementations. How do you keep both groups, and everything in between, engaged and aligned?

Keeping alignment across multiple teams with different focus areas [05:08]

Maria Apazoglou: I actually have a principle, and this is a great time of the year, because this is when we are designing and creating our OKRs for next year. One of the major things for me is having principles to make sure that everyone within the team has some goal that they're driving towards. And that's the way that I will also express my goals for myself. I would never have a goal which only focuses on the latest thing. I wouldn't have a goal for myself, or one that I express for my team as a whole, which would just be "drive AI". It'll be one of them, but it won't be the only one. I will also have one which is to ensure we have reliability in our existing systems to support our content processing. Because I recognize that that's equally important.

So the first thing for me is making sure that the team and the individuals that I have within my organization feel that they're driving towards something and they don't just see only one side of the org or of the world being promoted or advertised.

So when I communicate what we are releasing, I communicate it across; when I communicate our wins, I communicate across. And the win could be that we sunset a legacy application, or the win could be that we released a new LLM in our AI platform. I think that's a big part. So having that, I would say, nice distribution of the work and the deliverables, and communicating them in an equal way, I think makes teams feel more empowered and happy about what they're doing, instead of always wanting to shift to something new.

And then there's obviously other stuff that we are doing, which is for the teams that are working on some of the more historical applications. What we tend to do is say, "Okay, I know we are working on X, but let's say instead of moving from, I don't know, Angular 1 to Angular 2, tell me how would you actually redo the application from scratch, and then how can I use AI to actually help you in that journey to redo it from scratch?"

So encouraging the teams that have probably been maintaining some of the more historical applications to think about how to shift them to newer technologies, which are very similar to what we use for the more modern stack.

Shane Hastie: You touched there on using AI to make the shift. I know that you've been doing a lot of work, as you put it, democratizing AI within Thomson Reuters. Do you want to tell us a little bit about that?

Democratizing AI [07:35]

Maria Apazoglou: Yes, of course. So we started, I think, maybe a year and a half ago. I'm trying to remember exactly when it was. But obviously there was a big rise of generative AI. And the major question was, "Okay, how do we now, within Thomson Reuters, understand that AI is important and make sure that everyone within the company and within the organization has a goal to leverage AI and to understand AI?" Because it's not just about our products having an AI feature, like our legal AI assistants and so on. For instance, the sales team that is trying to promote and go to market with these products needs to understand what AI is, and what better way to understand AI than using it on their own? But then how does a salesperson use AI? How does someone that is doing marketing use AI?

And that's where we were thinking, "Okay, what shift do we need to make?" And the shift that we made was to take one of the tools that was initially built by our TR Labs community, which is one of the teams here that is more focused on innovation, and make it enterprise-wide. So we started with, I think, maybe 400 users at that point in time, primarily on the technical side, who basically had a few capabilities of asking, I think, GPT-3.5 at the time, because it was... It feels like it was forever ago, but it really wasn't.

But anyway, they were using GPT-3.5, and then we took that on and started making it more enterprise-wide. Making it enterprise-wide meant putting in place categories of how data is separated and how data is stored, and creating reusable components, so that when a new model came out, it would be really easy to integrate. And then we extended that on and on, up to the capabilities it has now, where you can effectively create an application with two to three clicks.

So there was technology work that we did to get it to where it is today to enable the organization. But more than that, it was really the training that we did for the organization. One of the great things was that the organization had an initiative of having X amount of employees using AI. And then we started with training workshops. I'm not sure, I've never counted, but I'm pretty sure we've done more than 50 trainings and workshops.

We tend to do at least one to two per week, training different teams of different sizes on how to use large language models and what typical use cases they can tackle. We then also created a community where users share ideas with each other. We call it the idea space. So basically users can come in and say, "Hey, I used our tool..." It's called Open Arena. "I used Open Arena to do X", for instance to go and create a C++ guru, which was an amazing thing to see, having started with C++ myself.

Or there was another team that used Open Arena to understand how to write better QA tests. Another team used Open Arena to create better emails and correspondence. And then we continue to support the data scientists, who might be using it to connect to different databases and compare responses against a variety of different models across all providers. But really for us it was the training and the documentation that we put forward, the certifications that we created, and the workshops that we've done, including a pizza workshop, as we did, I think, a couple of days ago, where we trained people on prompting with our newest tool and then we said, "Okay, we'll have pizza and prompting".

So yes, I think it really is around shifting the culture, providing the support throughout the journey, and also promoting and letting the community speak for itself about what they've used, because there's nothing better than hearing a user talk about what they've done with the tool. And today we are 12 and a half thousand users, up from the 400 that we were roughly a year ago. And if we think about the total number of employees that we have, more than 50% of our employees access the tool monthly, which is, for me, a fantastic achievement.

Shane Hastie: I'd love to dig into a couple of things there. One is, in the development space, how are your teams and individuals using these large language models, and what are the benefits they're getting?

Using the AI tools to accelerate development and testing [12:08]

Maria Apazoglou: So we have different tools for our development teams. Some of our teams are using GitHub Copilot, and one of the benefits that they report is that it's a lot faster to start writing code for a given feature. And one of the biggest pieces of feedback is how easy it is to write tests. And I'm hoping the developers that are listening to us... I don't know if they were like me, but typically, developers hate writing tests. I've never seen them enjoy it so much. I actually really liked it, to be fair. But anyway, most developers will not focus on testing, and I think with GitHub Copilot and tools like it, testing becomes extremely easy and extremely fast. So that's one of the things where we've seen quite an increase. Our test coverage has gone up quite a bit, and the reliability of the code, obviously, as a result of that.
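The kind of tests an assistant drafts might look like the sketch below. The `parse_version` helper and its tests are hypothetical illustrations, not examples from the discussion; they show the typical pattern of assistant-generated tests covering the happy path, edge cases, and error handling.

```python
# Hypothetical helper a developer might write; the tests below are the kind
# an AI coding assistant typically drafts in a few seconds.
def parse_version(version: str) -> tuple:
    """Parse a dotted version string like '1.2.3' into a tuple of ints."""
    parts = version.strip().split(".")
    if not parts or not all(p.isdigit() for p in parts):
        raise ValueError(f"invalid version string: {version!r}")
    return tuple(int(p) for p in parts)

# Assistant-style generated tests: happy path, edge cases, error handling.
def test_parse_simple_version():
    assert parse_version("1.2.3") == (1, 2, 3)

def test_parse_single_component():
    assert parse_version("10") == (10,)

def test_parse_strips_whitespace():
    assert parse_version(" 2.0 ") == (2, 0)

def test_parse_rejects_garbage():
    try:
        parse_version("1..2")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

A test runner such as pytest would pick these functions up automatically; the point is that an assistant can draft this whole file from the helper alone.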

We've also seen a lot of teams being able to resolve bugs much more easily. So when there is a bug in a given application, instead of going and searching through the code for where it was, they basically have a prompt within the code and they say, "Okay, find X, Y, Z, what went wrong". And then afterwards there's a suggestion for how to fix it. So that's a lot of how we've seen them use and leverage GitHub Copilot or equivalent tools. And then the other thing that we've seen is developers leveraging other tools, like Open Arena, which is a wide application of different large language models to do different activities.

So for some of them, what they do is more or less replace what they would historically have asked Stack Overflow. So we've seen some trends like that, and predominantly I think they've also noticed that some models are a bit better than others. They typically might be looking at Claude or Llama, but they have the ability to compare between all of them to figure out which one is the best for a given ask.
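A side-by-side comparison like the one described can be sketched as a small harness that sends one prompt to several models and collects the answers. The model-calling functions here are stand-ins; Open Arena's actual APIs are not shown in the conversation, so real backends (Claude, Llama, etc.) would replace the lambdas.

```python
from typing import Callable, Dict

def compare_models(prompt: str, models: Dict[str, Callable[[str], str]]) -> Dict[str, str]:
    """Send the same prompt to each model and collect responses keyed by model name."""
    return {name: ask(prompt) for name, ask in models.items()}

# Stand-in model backends; in practice these would call provider APIs.
models = {
    "claude": lambda p: f"[claude] answer to: {p}",
    "llama":  lambda p: f"[llama] answer to: {p}",
}

responses = compare_models("How do I reverse a linked list?", models)
for name, answer in responses.items():
    print(f"{name}: {answer}")
```

Putting all responses side by side is what lets a developer judge which model handles a given ask best.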

And then we obviously see high usage of that tooling for the development of AI. It's an amazing starting point. So we've seen many people use it as a starting point of, "Okay, what if I was to do a very simple RAG using OpenSearch, have a system prompt, select a large language model, and get the response out?". And then we have tools for helping them iterate over the system prompt and continuously evaluate, again using other LLMs or using a human in the loop. We've also seen a lot of QA kind of work happening within that space. I remember when images started to be supported, one of the initial use cases was taking screenshots of our applications and then asking the LLM to write the QA tests, basically the steps for QA and manual testing for these applications. So we've seen that as a use case as well.
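The simple RAG starting point described here (retrieve relevant passages, put them in a system prompt, select a model, get a response) can be sketched roughly as below. The keyword scorer is a toy stand-in for an OpenSearch query, and `call_llm` stubs the selected model; both are assumptions for illustration, not Thomson Reuters code.

```python
from typing import List

def tokenize(text: str) -> set:
    """Crude tokenizer: lowercase and strip basic punctuation."""
    return set(text.lower().replace("?", " ").replace(".", " ").split())

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Stand-in for an OpenSearch query: rank docs by word overlap with the query."""
    q = tokenize(query)
    return sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)[:k]

def build_system_prompt(passages: List[str]) -> str:
    """Put the retrieved passages into the system prompt."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only the context below.\nContext:\n{context}"

def call_llm(system_prompt: str, user_query: str) -> str:
    """Stub for the selected large language model."""
    n = system_prompt.count("\n- ")
    return f"(model response grounded in {n} passages)"

docs = [
    "Password resets are handled via the self-service portal.",
    "Expense reports must be filed within 30 days.",
    "VPN access requires two-factor authentication.",
]
query = "How do I reset my password?"
answer = call_llm(build_system_prompt(retrieve(query, docs)), query)
print(answer)
```

The same loop (retrieve, assemble prompt, call model) is what the iteration and evaluation tooling she mentions would then refine, swapping in real retrieval and a real model.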

So as you can see, a very wide variety, which I think is what's so amazing about the technology: it can really give you the space to be efficient in your workflow and in your work, no matter what you do, whether you're a QA engineer, you're writing something new, you're debugging something, you're a tester, et cetera.

Shane Hastie: The horror stories are that this will put us all out of work. Do you see that?

Humans are amazing at finding more and more complicated things to do [15:22]

Maria Apazoglou: No, I really don't see it. The way that I see the world, first of all, I always believe that there's always a human involved. There's always a human involved in some capacity or another. There might be a human involved evaluating an LLM. There might be a human involved in writing the code for the LLM. There is always some human involved. But what I actually believe is that we as humans are amazing at finding more and more complicated things to do. That's why I believe we won't be put out of work. What will happen, more naturally, is that more and more work will become easier. But it's the easy work that is becoming easier, and then we make more complicated work for ourselves. And what I think happens over time is just that the definition of complicated changes.

So something that was complicated, even without thinking about LLMs, something that was complicated even four or five years ago, is not complicated now. People would say that C++ was complicated 'cause you had to write classes and... Actually, people would say that C was complicated because you had to manage your own memory and you had memory leaks, and then, oh, C++ came out, happy days. So you know what I mean? I think what's happening is that as technology evolves, things become simpler, but we find more complicated things to do, and as such, I think, we'll always be around.

Shane Hastie: So that was the development space, with some really clear use cases and value there. For the organization as a whole, with these 12 and a half thousand people using the LLM tools, what benefits are they getting and what is the organization seeing from that?

Organisation-wide benefits from AI tool adoption [17:06]

Maria Apazoglou: Yes, so really it's everyone's tool for embedding in their day-to-day workflow and activities and making them easier. I can tell you use cases of my own. So for example, if I have to write a job spec, it's much easier for me now to upload a job spec that I wrote in the past and say, "Okay, based on this job spec, can you write a new job spec but for this type of role?" So there are a lot of things that give you efficiencies and benefits, and save you time on something that you were doing before. It's also a great starting point for a lot of the strategy-type work that one has to do, and for a lot of the presentations that one has to create. So I've used it for many presentations that I might do, to give me some ideas that I then take and evolve.

We see it a lot with people using it to create summaries of content, or to find facts in a specific document across the enterprise. It could be a legal document that we use for our products, or some terms-and-conditions document given to us for a new license that we might want to purchase. So there are many ways of leveraging this to give you a starting point. And it really is always a starting point. It is never the end point. We also see a lot of use cases on, I would say, assistants, like customer support type assistants or equivalent.

So many cases where we have documents that are like our policies or user guides of different tools and capabilities. The technology is really, really good at having a user coming in and asking a question and then bringing back the answer and the relevant kind of instruction.

So this is one of the use cases that we've seen have a tremendous amount of success, and it improves the experience of everyone, actually. It improves the experience of the support team, that before would have to answer the question, and the experience of the end user asking the question, who gets a quicker response. I think it's one of the use cases that is a win-win for everyone, really. But yes, I would say I think I've seen 3,000 distinct use cases, to be honest, at this moment in time, of one flavor or another. So yes, it's kind of used for everything.

Shane Hastie: There's a lot we've covered there, Maria. If people want to continue the conversation, where would they find you?

Maria Apazoglou: They can find me at Thomson Reuters in Switzerland, but obviously they can find me on LinkedIn if they want to ask me a question about my work, or about what we've done and how we approached AI, or how we are approaching development as a whole. And obviously we are trying as a team, as much as possible, to talk about our work and publish it. So we are creating a blog right now on our technical approach to large language models, our choices on multi-cloud, and things like that, which I think are very interesting. There are a lot of people that are starting their journey. They're asking questions like, "Do I go with one provider? Do I go with many? How do I deal with security? How do I make sure that I'm keeping costs under control?" So we are writing some articles around that, but definitely I'm on LinkedIn. They can always find me there.

Shane Hastie: Wonderful. Thank you so much. And we'll make sure we publish a link to that article as well. Maria, thank you so much for taking the time to talk to us today. It's been a pleasure.

Maria Apazoglou: Thank you so much. Thank you, Shane.
