
Building Responsible AI Culture: Governance, Diversity, and the Future of Development

In this podcast, Shane Hastie, Lead Editor for Culture & Methods, spoke to Inna Tokarev Sela, CEO of illumex, about implementing generative AI in development teams, emphasizing the critical need for robust governance across the data, policy enforcement, and explainability layers. She also discussed how intentional workplace policies and female-oriented mentorship programs have helped achieve gender balance in tech, with her company maintaining over 50% women employees through flexible work arrangements and supportive cultural practices.

Key Takeaways

  • Generative AI can automate roughly 30% of mundane development tasks, leading to significant efficiency gains in software development teams.
  • Governance of AI systems needs to operate on three levels - data quality, automated policy enforcement, and explainability of outputs - to ensure responsible and trustworthy implementation.
  • Organizations need to propagate existing security policies (like Azure AD/LDAP group policies) to AI systems rather than creating separate silos of governance.
  • Risk management has become crucial for AI implementations, particularly around accuracy of outputs, explainability, and compliance, especially in highly regulated industries.
  • Female-oriented mentorship networks remain important because communication dynamics and group interactions still present different challenges for women in tech.

Transcript

Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. Today I'm sitting down with Inna Tokarev Sela across 10 time zones. Inna, welcome. Thank you very much for taking the time to talk to us today.

Inna Tokarev Sela: Hi, Shane. Thank you for inviting me.

Shane Hastie: My normal starting point with these conversations is, who's Inna?

Introductions [00:44]

Inna Tokarev Sela: Who is Inna? CEO and founder of Illumex. I started the company around four years ago, but I think the most interesting part, professionally, is to speak about my passion for graphs and graph technology. During my first degree in physics and computer science, I fell in love with graphs, and my research thesis was around that. You might remember we didn't have all this nice software we use today, so we used MATLAB. What I did was simulate geometrical forms over graphs to implement operational research methods and make them more efficient.

And since then, I have been passionate about graphs, neural nets, and their combination. This is what I did in my thesis: developing algorithms based on a combination of graphs and neural net systems, specifically for the healthcare domain.

This was at the beginning of my career, and then I started my long journey at SAP. It was really exhilarating to see how much you can achieve with access to so many customers. I was part of the SAP HANA Cloud Platform team under the office of the CTO, and I got lucky to work with companies such as Walmart, Pacific Builder, and Lockheed Martin, helping them with their journeys to big data and cloud initiatives.

And of course, building the use cases and business cases and the strategy around that. This is my background. I'm a mother, and we as a family live in Tel Aviv, very close to the beach, so I do enjoy early morning and evening walks, seeing the sunset. These are the indulgences that we have.

Shane Hastie: Tel Aviv is a lovely city. So, some key elements of generative AI. This is the Engineering Culture podcast, so we are not going to talk about the APIs for accessing the AI tools; Srini and my other colleagues deal with all of that. Let's start with the cultural impacts on teams. If we're bringing generative AI into the software engineering space, what is the impact for us?

Cultural impacts of bringing AI into teams [02:58]

Inna Tokarev Sela: Bringing generative AI to any team brings a lot of cultural change. Given that you have cleared up all the privacy issues and shown people your tool, it's important to understand how employees are going to use it. So education, "What's the art of the possible?", is imperative for any team. In our development team, we use generative AI for testing and some code prototyping, but it's really about understanding how you can accelerate adoption.

I do believe in accelerating that adoption, because at least 30% of development consists of mundane tasks which are also necessary. Not every backend developer loves to write tests, right? And this is, for sure, something which could be automated. From my experience, you see lots of excitement in our development team around using generative AI, despite the fact that as a company, and especially in development, our age is above the median for startups. We are 35 to 37 on average, and I see a lot of excitement about this new technology.
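
To make the point concrete, here is a minimal sketch of LLM-assisted test drafting, assuming the OpenAI Python client; the model name and the function under test are illustrative, and any chat-completion API of a similar shape would work. Generated tests remain drafts until a human reviews them.

```python
# Minimal sketch: ask a chat model to draft pytest tests for a function.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment;
# the model name and the function under test are illustrative.
from openai import OpenAI

client = OpenAI()

FUNCTION_UNDER_TEST = '''
def parse_price(raw: str) -> float:
    """Parse a price string like "$1,299.00" into a float."""
    return float(raw.replace("$", "").replace(",", ""))
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You write pytest unit tests. Output only Python code."},
        {"role": "user",
         "content": f"Write pytest tests, including edge cases, for:\n{FUNCTION_UNDER_TEST}"},
    ],
)
print(response.choices[0].message.content)  # a draft for human review
```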

Shane Hastie: You blithely passed over "let's sort out the privacy" and so forth. But I know governance is an area that you are passionate about and concerned with. So it's not that easy to just get the privacy thing right, and the other elements of governance, is it?

Governance when bringing in AI [04:23]

Inna Tokarev Sela: Sure. Governance is a big topic, especially in the data management and analytics space. To me, every aspect of data usage, and of the software development around it, has historically involved lots of guardrails. We have governance for data and for analytics, and we have different audits and standards, GDPR and SOC 2, just to name a few.

Right now, for generative AI, we lack standardization on many aspects. We do have the EU AI Act and other legislative initiatives, which lay out some high-level requirements. But I see that right now the majority of enterprises actually decide on their own standards separately, and we do not have a lot of standardization around that.

But in general, I do believe that governance right now is not embedded enough in generative AI practice as it is, due to the fact that generative AI models are black boxes. And when teams fine-tune or customize them on their own organizational data, or build workflows around them, the data scientists are usually focused mainly on feeding those models with test data and understanding the consistency of the outputs.

For example, there are approaches such as RAG to make the models as customized as possible, to make them understand your data. But governance tends to be an afterthought. In the majority of cases, they do not understand the models well enough even to customize them, again because the technology is a black box, even as they understand more and more about how it behaves.
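
For readers unfamiliar with the term, RAG (retrieval-augmented generation) retrieves the most relevant organizational documents and prepends them to the prompt so the model answers from your data. Here is a minimal sketch of the pattern, with an embed() stub standing in for a real embedding model and vector store; all documents and names are illustrative:

```python
# Minimal RAG sketch: retrieve the closest documents to a question and
# assemble them into the prompt sent to the generative model.
import math

def embed(text: str) -> list[float]:
    # Placeholder: a real system calls an embedding model here.
    vec = [0.0] * 64
    for token in text.lower().split():
        vec[hash(token) % 64] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

documents = [
    "Churn is calculated as cancellations divided by active accounts.",
    "Upsell is any expansion of an existing contract's value.",
]
index = [(doc, embed(doc)) for doc in documents]

def build_prompt(question: str, k: int = 1) -> str:
    q = embed(question)
    top = sorted(index, key=lambda d: cosine(q, d[1]), reverse=True)[:k]
    context = "\n".join(doc for doc, _ in top)
    # The assembled prompt is what gets sent to the generative model.
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How do we define churn?"))
```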

But being able to plug generative AI into access controls and into the governance mechanisms which we already have in our organizations, the same way we plug in data management and analytics, is imperative, I would say even more so. So far, as practitioners, we didn't really govern our interactions too much. We did have secure access for interactions, like security controls and malicious software detection, and so on. With generative AI, we must extend that to governance of the interactions themselves.

What I mean is all those proverbial $1 tickets and SUVs: we need to make sure that not only the underlying data access and data quality are governed, but also the interactions themselves. Is someone trying to prompt-engineer your generative AI? Are they asking questions about data which they might have access to, while the model which answers those questions was trained on data they do not have access to? And so on. So it goes way beyond access control.
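
A minimal sketch of what governing the interaction itself might look like: before the model answers, check that every data source the answer would draw on falls within the asking user's entitlements. The users, sources, and in-memory entitlement store are illustrative assumptions:

```python
# Interaction-level guardrail sketch: refuse to answer when the retrieved
# context (or training data lineage) includes sources the user cannot read.
ENTITLEMENTS = {
    "alice": {"sales", "marketing"},
    "bob": {"finance"},
}

def authorize_interaction(user: str, context_sources: set[str]) -> None:
    allowed = ENTITLEMENTS.get(user, set())
    blocked = context_sources - allowed
    if blocked:
        # Refuse rather than let the model leak data the user cannot read,
        # even if that data was part of the model's fine-tuning corpus.
        raise PermissionError(f"{user} may not query sources: {sorted(blocked)}")

authorize_interaction("alice", {"sales"})  # passes silently
try:
    authorize_interaction("alice", {"sales", "finance"})
except PermissionError as err:
    print(err)  # alice may not query sources: ['finance']
```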

Shane Hastie: So what does that start to look like in practical terms? Who is holding the governance roles? How do we even define those rules?

Inna Tokarev Sela: I do believe that the future of GRC and governance roles as they exist today lies mainly in setting the standards and monitoring that implementations adhere to those standards. I do believe that governance as a discipline, as an everyday practice, should actually be part of everyone's job, and especially of domain experts'. By domain experts, think about people in sales, marketing, and customer support. When your context and reasoning for generative AI are built by data scientists, you might want to have workflows which embed domain expertise.

So why would data scientists decide how churn is calculated? Why should data scientists decide how upsell is defined in sales? There are always conflicts in data, especially when you connect enough data sources, and I do not think they should be resolved on the technical team's side. I do believe that domain experts should be involved more, especially because we expect this new generation of tools to provide self-service data analytics at scale for domain experts. So we need to bring them in as part of development as well.
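
One hedged sketch of what putting domain experts in charge of semantics could look like in code: a small registry of certified business definitions, under illustrative names, that the generative layer must resolve terms against rather than letting a pipeline silently redefine them:

```python
# Sketch of a certified-definitions registry owned by domain experts.
# All metric names, definitions, and owners are illustrative.
CERTIFIED_METRICS = {
    "churn": {
        "definition": "cancellations / active_accounts per quarter",
        "owner": "vp-customer-success",  # a domain expert, not a data scientist
    },
    "upsell": {
        "definition": "expansion revenue on existing contracts",
        "owner": "vp-sales",
    },
}

def resolve(term: str) -> str:
    """Return the certified definition, or fail loudly so a human decides."""
    entry = CERTIFIED_METRICS.get(term.lower())
    if entry is None:
        raise KeyError(f"'{term}' has no certified definition; route to a domain expert")
    return entry["definition"]

print(resolve("churn"))  # -> cancellations / active_accounts per quarter
```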

Shane Hastie: It sounds to me, particularly in that transparency, that we're looking to create or bring in some level of observability in the models, in the AI tool, whether it's in the training or whether it's in the processing. How do we do that?

Inna Tokarev Sela: I believe that observability in a generative AI implementation should address three levels. The first one is the data layer, so AI-oriented data: whether your data is healthy enough, whether you have a single source of truth. If you do not have a single source of truth, then, since we are speaking about semantic models which are probabilistic, you will get randomness that you would not like to have. So, AI-oriented data for starters.

The second layer is governance: what exactly are the policies that you would like to automate for generative AI? Generative AI, of course, is about scale, so you cannot apply everything manually. You should build automated guardrails to enforce whatever policies you have decided to enforce.

And the third one is the explainability layer, because if we speak about intelligent decision-making based on generative AI outputs, whether it's for development or for business users making decisions in their daily practice, it's all about trusting the results. And when you get the answer 42 from a black box, you cannot really make decisions based on that.

So observability and transparency also extend to the explainability of generative AI outputs. How was my prompt understood? Which data was it mapped to? What logic was deduced from this prompt? Did this prompt go through certified semantics which domain experts have signed off on? All of that should be in place. So, three layers: the data layer, the governance layer, and the interaction layer with explainability.
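
Those explainability questions map naturally onto an audit record. Here is a minimal sketch with an assumed schema; a real system would persist each trace to an audit log:

```python
# Sketch of an explainability trace for one generative AI interaction.
# The schema and field values are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InteractionTrace:
    prompt: str
    interpreted_intent: str          # how the prompt was understood
    mapped_sources: list[str]        # which data it was mapped to
    deduced_logic: str               # e.g. the generated SQL or query plan
    certified_by: str | None = None  # domain expert who signed off the semantics
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

trace = InteractionTrace(
    prompt="What was churn last quarter?",
    interpreted_intent="quarterly churn metric",
    mapped_sources=["crm.accounts", "billing.cancellations"],
    deduced_logic="cancellations / active_accounts, Q3 window",
    certified_by="head-of-sales-ops",
)
print(trace)  # would be written to an audit log in a real system
```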

Shane Hastie: I don't see many organizations doing that at the moment.

Inna Tokarev Sela: I think most organizations aspire to do it at the moment. I see more and more risk management requirements in generative AI projects, especially in highly regulated industries. Of course, risk management goes to the bias and privacy concerns, but it also goes to liabilities: the liability of making decisions based on potentially wrong outputs. So risk management around the accuracy, explainability, consistency, and compliance of outputs becomes the pinnacle of generative AI implementations.

The second consideration would be total cost of ownership, because it's very expensive to implement and customize off-the-shelf software for enterprise needs. So this risk management, to me, is becoming more and more established as a practice. And there is a plethora of solutions evolving in the space, from governance to cybersecurity to more offline tools for policy management, and so on.

Shane Hastie: Dig into that policy management, because if I think from the stance of the technologist implementing this and bringing it in, they have to provide the framing for these three layers to be in place. So let's start where you said "give us policies". What does the technical implementation of "apply policies" look like?

Technical implementation of rules & policies [12:35]

Inna Tokarev Sela: Yeah. Think about policies as the vehicle to address a few things. First of all, it's about the quality of data: whether the data is representative, whether it meets standards on duplication levels, and so on. But also whether it's even enough, distributed enough; for example, the whole bias component. There are mechanisms to measure that and to make sure that the data which is fed into prompting, or even into training, is even enough. That's for starters, and these practices, with LIME and other techniques, have already been used for a while, so they exist. So that's the data layer.
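
A minimal sketch of automating such data-layer policies: duplication level and distribution evenness (approximated here with normalized entropy) checked against thresholds. The thresholds and sample data are illustrative policy choices, not prescribed values:

```python
# Sketch of automated data-quality policy checks on a column of values.
import math
from collections import Counter

def duplication_rate(values: list) -> float:
    return 1.0 - len(set(values)) / len(values)

def evenness(values: list) -> float:
    """Normalized Shannon entropy: 1.0 = perfectly even, 0.0 = one value."""
    counts = Counter(values)
    n = len(values)
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    max_entropy = math.log(len(counts)) if len(counts) > 1 else 1.0
    return entropy / max_entropy

segments = ["enterprise", "smb", "smb", "smb", "smb", "enterprise"]
assert duplication_rate(segments) < 0.9, "too many duplicates"
assert evenness(segments) > 0.5, "segment distribution too skewed to feed the model"
print(f"duplication={duplication_rate(segments):.2f}, evenness={evenness(segments):.2f}")
```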

As for governance policies, think about them like this: you might want to propagate either your data source policies, or maybe the Azure AD or LDAP group policies that you have for your email use, your SharePoint use, and all that. You want to propagate them automatically to generative AI use as well, and to the data which is fed into generative AI.

So don't have it as another silo created specifically for generative AI; instead, connect it to the mechanisms that already exist in the organization. And for interactions, governance policies can go, for example, to the detection of patterns: a user comes from a specific organization, let's say customer success, and suddenly they try to access financial information, maybe on purpose, maybe not.

So there are different guardrails, which go not only to access policies, but also to intent, right? To understand the context of the user, the context of the problem, the context of the data. And this is why I started my introduction with graphs: this is where graphs complement generative AI semantic models, to provide more and more context in those implementations.
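
A minimal sketch of both ideas together: reusing existing directory groups (the in-memory dictionary stands in for a real LDAP or Azure AD lookup) and flagging out-of-pattern requests like the customer-success example above. All names are illustrative:

```python
# Sketch: propagate directory groups to the AI layer instead of building a
# separate governance silo, and flag requests outside the user's usual domain.
DIRECTORY_GROUPS = {  # would come from an LDAP / Azure AD query in practice
    "carol": {"customer-success"},
}
DOMAIN_OF_SOURCE = {
    "zendesk.tickets": "customer-success",
    "erp.ledger": "finance",
}

def check_intent(user: str, requested_sources: list[str]) -> list[str]:
    """Return the sources outside the user's groups, for human review."""
    groups = DIRECTORY_GROUPS.get(user, set())
    return [s for s in requested_sources if DOMAIN_OF_SOURCE.get(s) not in groups]

flagged = check_intent("carol", ["zendesk.tickets", "erp.ledger"])
if flagged:
    print(f"escalate for review: {flagged}")  # -> ['erp.ledger']
```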

Shane Hastie: A lot there, and a lot for us to think about in terms of what governance looks like in the implementation of generative AI. I think some good advice and a whole lot of things that our listeners can dig into. If I can switch topics a tiny bit, or quite a bit: the gender imbalance in tech. I know that your company is over 50% women. That's pretty unusual.

Tackling the gender imbalance in tech [15:05]

Inna Tokarev Sela: It's pretty unusual, and intentional. I believe the talent is there, and the talent is looking for an environment which can support specific balances. Every demographic requires the setup which suits it best, and COVID taught all of us that we can be more flexible in the work environment, from a remote standpoint, from an hours standpoint, and in other aspects as well.

I think the majority of software development companies, especially the big ones, the software giants, are coming back to a five-days-in-office policy, and I think it's going to be discriminatory against specific demographics at scale. At Illumex we've been happy and lucky to have this talent with us, but it does require a specific setting to be facilitated, for sure.

Shane Hastie: So what are some of those policies? You mentioned flexibility and so forth. But, what are some of the concrete things that you've done at Illumex?

Concrete practices for flexibility [16:08]

Inna Tokarev Sela: It starts with the hiring process; of course, we only hire people who meet our standards. We have a two-days-in-office policy, and those days can be flexible. We keep meetings within core hours, 10:00 AM to 4:00 PM, which means the majority of people, whether fathers or mothers, can attend them without disrupting the morning routine or the evening routine.

In general, we are flexible about sick days and out-of-office days for whatever reason. And this proves itself, because sometimes the school schedule can be disrupted, and especially in this region that happens.

Of course, you need to have care at home. On the other side, it shouldn't be gender discriminatory, because we want to have the same support for everyone in the company. Our kids ratio at Illumex is, I believe, 2.3. We also have dogs, and parents to dogs; some people have dogs and cats. To me, we should support everyone's needs and cater to whatever flexibility each person requires. And also geography: some employees might need full remote, and some employees' commutes may be longer than others', which should be taken into consideration as well.

Shane Hastie: Another thing I know that you are very passionate about and involved in is mentoring programs. How do we design and set up a good mentoring program?

Designing good mentoring programs [17:44]

Inna Tokarev Sela: I do believe in mentorship programs which are specifically designed for female professionals, and this is due to the fact that communication is still different. Being heard, or speaking up, does not come naturally or at ease to the same extent, and ego management in group dynamics also comes into play. So I do believe in female-oriented networks, despite the many conversations arguing against that. And it bears fruit: we see more and more female founders.

If you look at data companies, there is pretty wild growth in those numbers, and we also see more female data leaders: chief data officers, heads of data, heads of analytics. Especially in this discipline of data, analytics, and generative AI, we see more and more female talent, and I'm super happy to see that. I think balance is everything.

Shane Hastie: What's the question I haven't asked you that you would really like to share with the audience?

Inna Tokarev Sela: I'm really passionate about what the future holds with all these new advancements and all these new technologies. It could be scary for some, because the change is accelerating, and the change is here. To me, as an industry, software development and data management are under an overload of maintenance, of technical debt, of testing, of all the, let's say, less creative tasks that we have on our plate every day. So we should embrace this innovation and use generative AI to augment our capacity.

And on a professional level, I'm passionate about an application-free future. Who likes to go to 30 different interfaces over the day, with different tasks and different settings? This is what I'm personally very passionate about, and this is where illumex also helps companies get closer to that future.

Shane Hastie: Well, a lot to think about there and a lot of good advice for our listeners. If people want to continue the conversation, where do they find you?

Inna Tokarev Sela: I am very active on LinkedIn. The social network has lots to offer, so please do connect with me on LinkedIn, and I will be happy to continue the conversation there.

Shane Hastie: Thank you so much.

Inna Tokarev Sela: Of course. Thank you, Shane.

