
Generative AI and Organizational Resilience


Summary

Alex Cruikshank discusses where GenAI is likely to have the greatest impact, steps to manage this change, and ways to leverage the shift to AI mediated work to better understand business processes.

Bio

Alex Cruikshank has spent more than 25 years building software products for over 50 companies, ranging from early-stage startups to household brands. Now working in West Monroe's AI Lab, he is leading R&D efforts and working with clients to unleash the potential of Generative AI for consumer software products and back-office systems.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Cruikshank: A long time ago, just a little bit after The Muppet Show left the air, I was a kid and I decided I wanted to learn how to program computers. I started building a few games. I started making a few small applications. As I was doing this, and learning how to program on my own, I came up with one simple rule that has really helped me throughout my entire programming career. That is, computers are dumb. They're really dumb, they don't know what to do. You've got to tell them what to do, and then they execute exactly what you say. Then you have to tell them precisely what to do next. The other thing that computers are really bad at is interacting with humans, which is sad, because that's actually what we usually want to use computers for. When you want a computer to interact with a human, you end up having to tell it exactly what to say in your program. Then when you want the human to respond, you have to narrow the options, so that your program knows what the human is going to say. Then you still have to do a lot of work to get the input and capture it in the right way. The computer can interact with humans, but it's like a puppet: there's always a programmer behind it that's manipulating it, making it do that interaction.

Conversational UI, and Generative AI (GenAI)

A few years later, about 10 years ago, maybe not quite that much, these conversational UIs came out. There's Alexa, and Siri, and Google Home. Now computers are actually pretty good at interacting with humans. You could speak to them, and they could speak back. If you've ever programmed a conversational UI, what's actually going on is that it understands your words, so you get all the words, but you still have to put in all the sentence structures, basically all the different ways people might say the thing you want. You have to put in all the synonyms they might use. You have to anticipate everything. Of course, when it responds, it's still your program responding exactly the way you want it to. It's still that puppet that's in the background. You still have to program it. The programmer still manipulates it in the same way.

This year, things changed. Now that puppet, it's talking on its own. There's no programmer there, it's just a bunch of numbers. It's talking back, and it's interacting with humans really well. It's not entirely dumb. As a programmer, I find this amazing, and exciting, and a little terrifying. It's definitely going to change the way we think about writing software entirely. That's not all. When you think about it, think about all the communication that goes on in your business, think about how much of that communication is mediated by computers. We've got Slack. We've got email. This presentation is done on a computer. Programming, that's a form of communication. Almost all our communication is mediated by a computer some way. Now think about what percentage of your business processes rely on that form of communication. It's probably all of them. When you think about how all those processes can now be improved, made more efficient, and augmented with generative AI, then you can start to get an idea of the transformation that we're facing over the next few years, the scale of it. I think we all should be amazed and excited and a little terrified.

Profile, and Outline

I'm Alex Cruikshank. I'm going to talk to you about generative AI and organizational resilience. I am a product engineer at West Monroe. For the last year, I've been working in the AI Lab. In the AI Lab, we spend a lot of time working with our clients to figure out how they can use generative AI and what solutions to build around it. We also spend a lot of time figuring out how to build solutions for ourselves and how to make our own work at West Monroe more efficient. West Monroe is a consultancy, and we work with a lot of private equity firms. Those private equity firms have a lot of portfolio companies, and we end up talking to those portfolio companies, their CEOs, their CIOs, their CFOs, about what the future looks like with generative AI: what they're doing, how they're coming up with their solutions, how they're trying to stay resilient through this transformation that we all know is going to happen. I'm going to talk to you a little bit about what we talk about in those meetings.

The Threat

First, I'm going to talk about the threat. By threat, I mean the transformation. I'm going to put this stat up here: up to 3.3% annual increase in global productivity due to generative AI, across the entire economy. This is what analysts and economists are predicting. There are a lot of stats I could put up here to convey the scale of the thing; they're all made up. This number is made up too, but it's not crazy. The reason it's not crazy is that if you look at the 2000s, you'll see that economists believe a number roughly like this is the productivity that was gained from the internet, from all the startups and all the web companies coming into play. Basically, when you think about this number, it's enough to absorb a small financial crisis, a dip in the housing market, and gas prices rising. You can have all those things and still have a good economy. If this productivity comes to pass because of generative AI, it's a really good thing for the economy. It's all coming from transformation in our business processes. It's all coming from efficiency; that's the only place it can come from. At the end of the day, this is a mountain of change that we're looking at.

This is another analysis. This is from McKinsey, and they were looking at job automation. Basically, they divide jobs by education level. At the bottom, you have no high school degree. At the top, you have a master's or PhD. They're estimating how much automation there's going to be, with or without generative AI. The dark blue at the bottom is without generative AI, and the light blue at the top is with generative AI. You can see that there's going to be a lot of automation no matter what. With generative AI, suddenly it's a great equalizer. Suddenly, all the jobs are going to get automated. Notably, generative AI is more about automating those higher-skilled jobs, the ones that require more education. As we think about automation, some of the jobs we were not really worried about a couple of years ago are now looking a little shaky.

Also, I want to talk about the character of this transformation. You can't expect generative AI to just hit one department within the enterprise. Generative AI is going to affect all the communication within the enterprise. Any place there's communication, there are going to be opportunities to make it more efficient with generative AI, which means you're going to have lots of small initiatives across the entire enterprise. No one's exempt. It's going to be transformation by a thousand cuts. Of course, every industry is going to be a little different. We've been talking with manufacturing companies, retail, finance, insurance, healthcare; all these companies have different challenges. They all have different things they want to solve with generative AI right now. They also have a lot of things in common. The entertainment industry right now is in the news for AI, and they definitely have a lot of big challenges with it.

I want to talk about software, just because I think a lot of us are in software. It's definitely close to my heart. It's also where changes from AI could potentially happen a little sooner. At West Monroe, we did our own study using GitHub Copilot to see how much it would improve our developers' performance. It turns out developers were getting a 22% gain in productivity. If you talk to Microsoft or GitHub, they're going to say 50%, but they're a little biased. We're going to stick with the 22% number, but still, that's a huge gain in productivity: roughly one developer in four or five freed up. If you think about frontend development, generative AI is really good at talking to people. Computers are now really good at interacting with people. A lot of the work that we do when we build our frontends is to make that communication possible between humans and computers. We're spending a lot of time building UI. There's a question of whether we need to do as much of that now that we have AI. It's not entirely clear, but it seems likely that we just won't need as much of that work, and that effort is going to have to go elsewhere.

Same with design. We're not seeing a lot of great solutions for UX design right now. We see a lot of people trying to make things happen. Design has been getting automated for a long time: with design systems and component libraries, it's a lot easier to build a really good-looking, well-working site without a designer. It's not hard to imagine that in a year or so, generative AI is going to be able to put together all the parts and build a really great experience without much effort, just from an explanation of what you want.

The Timeline

Now I want to talk about the timeline a little bit. Before I do, I want to start with this quote, because this is my favorite quote of all time. It's a William Gibson quote: "The future is already here, it's just unevenly distributed." What this is saying is that for any technology future you can imagine, it's probably true that the technology for it already exists somewhere; it just hasn't spread, because we don't know how to use it well, the market conditions aren't right, or maybe people are just locked in. I like to think about this quote every time I get impatient for our future of flying cars and pet robots. Then I remember that those things actually do exist, they're just really expensive and scary right now.

There's no crystal ball for how the transition is going to play out or how long it's going to take, but we can look at the past and get an idea from the way the internet grew. When you think about e-commerce, and streaming, and social media, these things are just part of our lives now. They've really changed the way we live. That transformation has happened; we've been completely altered by that technology. It didn't happen overnight. It took a long time. For the most part, the technology existed many years before the things we built with it. It took a while for people to get used to it, and for people to figure out how to make it spread. We envision the same thing for AI. In fact, we can just take some of those curves and apply them to AI. We know things are underway right now, like I was talking about earlier with GitHub Copilot. Code generation is a thing that we can use right now, and it delivers productivity. It's production ready. ChatGPT obviously is out there, so people are using that to automate in the office, plus all these products are coming out that have AI added to them.

We're now looking at, how do we automate support? How do we do knowledge management better? Products are coming out for that. There are projects that we're working on with our clients around that. That stuff's all happening. All these things are going to have their own adoption curve. They're all going to take a while before they really become transformative. Overall, the generative AI transformation is going to be the product of all those smaller transformations. I don't think there's going to be much of a transformation this year, even though there's so much hype. Everyone is still figuring things out. Here and there, people are starting to get a little disillusioned, so you know the hype curve is working. It's taking time, so really, it's probably 2 to 3 years before we see some real transformation in industries. Then, 10 years from now, maybe we're going to look back at this time, pre-generative AI, and it's going to seem like a strange place. That's speculation.

Staying Resilient

How do we stay resilient in the face of all this change that probably is going to happen, especially when we don't really know exactly what it looks like? As always, if you want to stay resilient, you want to be flexible. I think with generative AI right now it's a balancing act. You can't just say no. You can't resist it. You can't ignore it, because eventually your competitors are going to overwhelm you, and your employees may leave. At the same time, we're talking about your business processes, processes that are probably working right now, and we can't just go through and upend them all to be AI optimized, because we don't know that they're going to work after we do that. We need to have a balance. We need to be gradual. Like I said, I think we have time. Here's the first idea: we should let people automate their own jobs. People know what their job is. They know what they need to do. They know the tedious parts of it, and what needs to be automated. The thing is, generative AI is the most democratic technology at least since the spreadsheet. It's not hard to use. You just talk to it, and it talks back. It's possible for everyone to start automating. The only catch is that to do that, you have to let people actually use AI. Maybe for some of you, it's obvious that you can use generative AI at work, on work information. Most companies are struggling with this idea, and for good reason: you can't just ship sensitive information to OpenAI without breaking compliance rules, or at least your security rules within the company. You have to find ways around that.

Fortunately, there are a lot of good options right now. Amazon just partnered with Anthropic, which means all three of the major cloud platforms now have a generative AI solution associated with them. What that means is, if you are storing your sensitive, private data in the cloud, there's no real reason why you shouldn't be able to use AI in that same cloud with that sensitive information. It's going into the same place and coming back from the same place, so it should be ok. This is a really positive development, as far as I'm concerned. I was actually talking about this very subject to a bunch of CFOs a couple of weeks ago. I said this, and there was a chief security officer who almost knocked me off the stage. He came to the computer and said, "We've got to take this carefully, you've got to be slow. These hackers are using generative AI to do phishing attempts." I was like, are you worried that your employees are going to start hacking your company, and that they're going to use your corporate account to do it? There's a lot of fear out there. There's a lot of distrust. A lot of it's rational, and a lot of it's not. Just keep that in mind and try to be sane about it. Hopefully, we can all get to using AI a lot quicker.
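As an illustration of that same-cloud idea, here is a minimal sketch of calling a model that lives next to your data, assuming AWS Bedrock with an Anthropic model behind it. The region and model ID are illustrative placeholders, not recommendations.

import json
import boto3

# The request stays inside your AWS account: the same cloud that already
# holds the sensitive data also hosts the model endpoint.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(question: str) -> str:
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": question}],
    })
    response = client.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
        body=body,
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]

print(ask("Summarize the key risks in our Q3 vendor contracts."))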

Another idea: build AI literacy within the company. Like I said, generative AI is democratic. It's very easy to use, but just because it's easy to use doesn't mean everyone's good at it right when they start. There are a lot of things that you get better at as you use it: figuring out what kinds of tasks are good for AI and what aren't, figuring out how to word your prompt so you get better results, and, most importantly, how to keep it from hallucinating. These are all things you learn the more you work with it. If you give people access to it, they're going to figure it out. If you can provide training, not only are they going to get there faster, but you can also start to signal that it is ok to use it at work, and maybe even encourage people to use it at work.
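On the hallucination point, a lot of that literacy comes down to prompt construction. Here is a minimal sketch of a grounded prompt template that restricts the model to supplied context and gives it an explicit way to decline; the wording is my own illustration, not a quoted best practice.

GROUNDED_TEMPLATE = """Answer the question using ONLY the context below.
If the context does not contain the answer, reply exactly: "I don't know."

Context:
{context}

Question: {question}
Answer:"""

def build_prompt(context: str, question: str) -> str:
    # Filling a fixed template keeps the model anchored to known facts.
    return GROUNDED_TEMPLATE.format(context=context, question=question)

print(build_prompt(
    context="Expense reports are due on the 5th business day of each month.",
    question="When are expense reports due?",
))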

This is a big one for me, you should collect and share your prompts. You can do it in a wiki. You can do it in your content management system. At West Monroe, we built an application that allows people to use our sanctioned AI, and also store prompts next to it and use those prompts. Even if you're just putting it in a shared spreadsheet, that's still great. You might want to consider having a role like a prompt librarian to curate those and highlight the ones that are working well, and maybe tweak some prompts so that they work a little bit better. Obviously, if you're sharing prompts, you are saving people time. They don't have to write the prompts themselves. You're also helping to build AI literacy because people can look at other people's prompts and see how they're doing things and figure out how to do it better. This is the thing I really love about it. When people are submitting their prompts, what they're really doing is documenting these obscure business processes that you may not know about, and that these people really want automated. That's just another way to build a little bit of resiliency.
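As a hypothetical sketch of what such a shared prompt store could look like, with SQLite standing in for the wiki or CMS and a curated flag for the prompt librarian to set (all names here are invented for illustration):

import sqlite3

conn = sqlite3.connect("prompt_library.db")
conn.execute("""CREATE TABLE IF NOT EXISTS prompts (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    body TEXT NOT NULL,
    owner TEXT NOT NULL,
    tags TEXT DEFAULT '',      -- comma-separated, e.g. 'sales,summary'
    curated INTEGER DEFAULT 0  -- set by the prompt librarian once vetted
)""")

def submit(title, body, owner, tags=""):
    # Anyone can contribute a prompt; curation happens later.
    conn.execute("INSERT INTO prompts (title, body, owner, tags) VALUES (?, ?, ?, ?)",
                 (title, body, owner, tags))
    conn.commit()

def search(tag):
    cur = conn.execute(
        "SELECT title, owner, curated FROM prompts WHERE tags LIKE ?",
        (f"%{tag}%",))
    return cur.fetchall()

submit("Summarize weekly sales call notes",
       "Summarize the notes below into five bullets for a sales manager...",
       "alice", tags="sales,summary")
print(search("sales"))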

The other thing from that survey we did: developers gained a lot of productivity, but the thing we really saw was that 100% of the developers loved it. They all said that they enjoyed working with it. A lot of them said they would never go back to not working with it. Giving developers Copilot, or some other coding assistant, is not just a productivity tool; it's also going to help you with retention and morale.

Leaning In

Those were some basic, organic steps: let the AI in, build our tolerance to it, and get ready for transformations in the future. We're all in different places in our AI journey. Sometimes we want to lean in a little bit. Sometimes we need to find cost savings or otherwise take more advantage of generative AI. We are working with a lot of clients that are doing that right now, and we are running into a lot of problems. I want to talk to you a little bit about how to avoid some of those. The main thing I want everyone to understand is that you can't understand a person's job by understanding their job description. Every person brings a lot more value to their job than you would think, and they bring it in little ways. People are constantly working around problems. They're constantly sharing how they worked around those problems. They're innovating. They're also bringing empathy to their job. These are all things that AI just is not capable of doing right now. The AI can often do the job description, but often the job description is not enough to do the job, and you really need a human for that.

An example of this: I was working on a chatbot a little while ago for an insurance company; it was to give coverage advice. For esoteric reasons, when I was testing it, I had to enter, "I think I'm pregnant." I typed in, "I think I'm pregnant," and it would always respond, "Congratulations, you're covered for blah, blah, blah." I'm like, I don't know that this is a congratulations moment. I went in and tweaked the prompt a little bit. I said, "I think I'm pregnant." It said, "I'm sorry to hear that." I was like, that's worse. You can't explain that context to an AI. That subtle social situation is probably just a little bit beyond it. You've got to be careful about those little things.
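You can't give the model real social judgment, but you can at least instruct it to stay neutral. Here is a minimal sketch of steering tone through the system prompt; the wording is purely illustrative and would need review by domain and legal teams.

SENSITIVE_TONE_PROMPT = (
    "You are a benefits assistant for an insurance company. "
    "When a member mentions a sensitive life event (pregnancy, illness, "
    "bereavement), do NOT congratulate or commiserate. Acknowledge it "
    "neutrally, e.g. 'Thank you for letting us know', then state the "
    "relevant coverage."
)

def build_messages(user_message: str) -> list[dict]:
    # Standard chat-API message shape: system instruction plus user turn.
    return [
        {"role": "system", "content": SENSITIVE_TONE_PROMPT},
        {"role": "user", "content": user_message},
    ]

print(build_messages("I think I'm pregnant."))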

This is the approach that we would like people to take to automating jobs. We want to take a product approach. The first thing you do is learn from the workers who are there: not what you think the workers are doing, but what the workers are actually doing. Then, when you build, you don't want to automate, you want to augment. The nice thing is, anything you build to augment is going to be on the path to automation. You build tools. You give those tools to people and let them use them. You learn how well it's going, and then you learn how to build efficiency with those tools. As you're building in efficiency, it may be that you grow and don't need to hire, or maybe people leave, or whatever. Maybe you just end up with all the productivity gains you're looking for without ever needing to fully automate, and you still have the human in the loop, which is nice. Maybe you do still need to automate. Then you just take that next step and finish the automation where you need to.

The Uncanny Valley

I'm going to talk a little bit about the uncanny valley. The concept of the uncanny valley comes from animation and robotics. It's the idea that as a humanoid character gets more human-like, it gets more relatable up to a point; then, as it gets close to human but not quite believable, suddenly it reverses and becomes really creepy, like these mannequins. A similar thing happens with AI if you use a chatbot. If you tell someone they're going to be talking to an AI, a lot of times they're just going to refuse; they want to talk to a human. If you don't tell them and put them in an experience that looks like they're talking to a human, they will quickly determine that they're not talking to a human, and then they're going to get angry. These are the uncanny valley effects that we have to watch out for. If they get angry, that's really bad; that can damage your brand. It's definitely going to create a bad customer experience. You just don't want to do that.

The solution to the uncanny valley is to just stay out of the valley. You don't ever want to get too close to human. You want to pull back a little bit, make things a little bit cartoony. You can do the same thing with a chatbot. One of the easy things you can do is have it stop using "I" as its pronoun and start having it use "we". It's the voice of the company that is talking. You're not trying to suggest that a human or an individual is talking. Another thing you can do is not make it look like a chat at all: make it look a little bit more like a search and have it return results. Maybe it summarizes results the way a chatbot would. Then also have another prompt that allows you to expand on your search. It ends up being effectively the same thing as a chat, but it doesn't look like a chat; it's a cartoony version of a chat. Ideally, it keeps people from being as creeped out.
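A hypothetical sketch of that "cartoony chat": the model is instructed to speak as "we", and the application renders its answer as search-style result snippets rather than a dialogue. The call_llm function here is a stand-in for whatever chat API you use.

from dataclasses import dataclass

COMPANY_VOICE_PROMPT = (
    "You speak as the company, always using 'we', never 'I'. "
    "Return two or three short snippets, each on its own line, "
    "phrased like search results rather than a conversation."
)

@dataclass
class ResultCard:
    snippet: str

def call_llm(system_prompt: str, query: str) -> str:
    # Stand-in for a real model call; returns canned text for this sketch.
    return ("We cover routine dental cleanings twice a year.\n"
            "We require pre-approval for orthodontic work.")

def search_style_answer(query: str) -> list:
    raw = call_llm(COMPANY_VOICE_PROMPT, query)
    # Render each line as a result card instead of a chat bubble.
    return [ResultCard(line) for line in raw.splitlines() if line.strip()]

for card in search_style_answer("Is dental covered?"):
    print("*", card.snippet)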

Conclusion

Generative AI is going to change everything. It's going to take years, though. We should prepare by making sure all our employees are AI literate. When we automate our jobs, we should make sure that we are thinking about the human element and really taking care when we do that.

Questions and Answers

Participant 1: I've been thinking about this a lot, and two things I want you to talk about. One is, the different testing skills we're going to need to be able to write that whole thing, looking at the three pieces. Then the other one was, my observation is that, one thing you didn't say, which I wish you had got to [inaudible 00:27:41] now, is the approachability of this technology. A year ago, maybe, you had to be like this data scientist, but today, it's really approachable. I think change is going to accelerate. I don't know if you want to talk about the development?

Cruikshank: First thing is testing. I agree, testing is really hard with generative AI. It's non-deterministic. It can say different things every time. It's hard to evaluate what's a good answer versus not; it's pretty subjective. I actually have written a tool to test bench different responses from chatbots, so at least a human can compare them over time. You make a subtle change to a prompt, it allows you to evaluate it and judge it, and then maybe you can run all the responses again and see if it works on all of those. Still, it's not easy. I think it's only going to really be solved by having another AI evaluate the responses. I don't think we're really there yet. The tools don't exist yet, but I know they're coming really quickly. As far as the approachability, I totally agree. I think it is the most approachable technology I've seen in a long time. It's interesting, because I think it's really not the technologists who are leading right now. Everyone has dived into generative AI, into ChatGPT, and they're figuring out how to use it. I think all the programmers and product people are just trying to catch up and figure out how they can take advantage of it, too.
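As a rough illustration of that kind of test bench (a sketch in the same spirit, not Cruikshank's actual tool), the code below runs each prompt variant several times, since responses are non-deterministic, and writes them to a CSV with an empty column for a human grade. query_model is a stand-in for a real model call.

import csv
import random

def query_model(prompt: str) -> str:
    # Stand-in; a real implementation would call your LLM here.
    return random.choice(["Answer A...", "Answer B...", "Answer C..."])

def run_bench(variants: dict, runs: int = 3, out_path: str = "bench.csv"):
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["variant", "run", "response", "human_grade"])
        for name, prompt in variants.items():
            for i in range(runs):
                # human_grade stays blank for a reviewer to fill in later.
                writer.writerow([name, i, query_model(prompt), ""])

run_bench({
    "v1": "Explain our refund policy.",
    "v2": "Explain our refund policy in two plain sentences.",
})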

Participant 2: You've talked about automation, and I'm just curious if you have thoughts about dealing with our workforce asking or even talking about automation, where people feel like, "That's my job. What would I do next?" In the context of resilience in our organizations, what are your thoughts about what we do with that freed-up overhead? Then maybe related, you said this is exciting but also freaks us out. I think I've said the same thing. Is it for those reasons, or do you have other reasons for that?

Cruikshank: First of all, what to do with that overhead, I think, is a big problem. It's one of the reasons why I'm saying we should be a little scared. We should be figuring this out. I think one of the things about building that AI literacy is that it essentially allows people to retool a little bit. Maybe some of these jobs are going to go away, but people are going to be able to find new, maybe AI-assisted, jobs that are going to be better and also more valuable. The thing is, people are going to want that. People are not going to want to do the jobs that are going to get automated away; the fact that they can be automated means they're tedious in the first place. Hopefully, people are looking at that. I do think that not everyone's going to take the steps they need to stay ahead of it. There is going to be automation. It's going to happen one way or another. You don't get those productivity gains without people getting essentially automated out of jobs. We know that when that happens, people end up at other jobs. Overall, it's a good thing for everyone; everyone gets a little richer. It does cause a lot of stress when people lose their jobs. I think everyone should be trying to prepare for that.

I think that is why we should be a little afraid. I don't think we should be afraid of the AI itself. I'm not one of those people who think AI is going to take over the world or anything. When you work with it, you realize that it doesn't have that sense of agency right now. Maybe something else happens in the future, and it's going to be scary then, but it's not scary now. When I say it's frightening, it's just frightening because it never seemed like this would happen, that computers would be capable of doing this thing. It's a little creepy because, yes, this thing that seemed inanimate before is now animate. It's also a little creepy because, yes, it's going to disrupt a lot of lives, and it's going to disrupt a lot of businesses. Change is always scary. It's not necessarily bad, it's just scary.

Participant 3: Have you ever recommended comparing different GenAI tools, like Bard and [inaudible 00:33:11]?

Cruikshank: Is the question, what's the difference between these different models?

Participant 3: What is the comparison? If you have to build something or recommend something, should it be that the GenAI models don't exceed the limits of your ethics, and that you don't have sarcastic language in the responses, [inaudible 00:33:55], copyrights, things like that? Because they do it.

Cruikshank: You're talking about guardrails. It's nice that, yes, Anthropic has put a lot of fine-tuning into it to make sure that it will not do that. I haven't actually tried it enough to know how well it does with that. The best way to avoid the AI doing offensive things is to try to stay out of the realm where things can be offensive. Don't ask it to be funny. That will almost always be offensive. Just ask it to respond to a question as succinctly as it can. Generally, I think all the big platforms do well with that. I certainly haven't seen any problems. If you're worried about it, you could certainly put a standard profanity filter on the end, and you could even use that as a guard and have it regenerate the response. I don't think profanity is the issue, though. I don't think that's ever really going to be a problem. It's more the subtle, offensive stuff that is going to be a little bit harder to track. One of the things you can do is put an intent filter on the front, so every time anyone asks for something from an LLM, you can have an LLM ask: is what they're asking appropriate? If it's not appropriate, you can steer them away from that question. If it is, you can let them go on. That's one way to deal with it. If you can just rely on Anthropic or OpenAI to have good responses (OpenAI is pretty good about it, maybe not perfect), that's definitely going to be the easiest.
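A hypothetical sketch of that intent filter: a cheap up-front classification call decides whether a request is appropriate before the main model answers it. The llm function is a stand-in for any completion API, and the wording is illustrative.

def llm(prompt: str) -> str:
    # Stand-in; wire this to your actual model call.
    return "YES"

def answer_with_guardrail(user_request: str) -> str:
    verdict = llm(
        "Answer YES or NO only. Is the following request appropriate "
        f"for a customer-support assistant to answer?\n\nRequest: {user_request}"
    )
    if verdict.strip().upper().startswith("NO"):
        # Steer the user away rather than generating a risky response.
        return ("We can't help with that here, but we're happy to answer "
                "questions about our products and services.")
    return llm(f"Respond to this as succinctly as you can: {user_request}")

print(answer_with_guardrail("How do I update my billing address?"))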

Participant 4: You're talking about AI replacing people's jobs, potentially. Is that a reality yet? Have you seen these actually respond well when you ask them a question? Have you actually seen that being done?

Cruikshank: I've seen it being tried. I've actually seen it being tried and failing. I think the thing is, there are some very low-skilled jobs, especially around support and customer service, where people are interacting with customers, and they're really expensive, and companies want to stop paying for that. They've definitely tried to do that with AI. It's just harder than it looks, for a lot of reasons. Especially if you're talking about support, people end up building a mental model of what they're talking about. Even if the AI has access to all the documentation and all the information, it can't make the leaps that it needs to make to really make that happen. If we're talking about general customer service, you have to watch out for those subtle cultural issues, the social factors that maybe the AI doesn't know, because companies can get in trouble there. A lot of times they don't care. They just need something cheaper, or they don't need it to be good. Then you run into the issue of maybe making your customers angry, because they don't want to talk to an AI.

Participant 5: I'd like to ask you about employee training. A lot of times these TA jobs are entry-level jobs, and the people in them are still learning the ropes. In order to be creative, they have to understand the domain first. In a world where we're automating the TA specialist job, how are we going to train people to be able to be creative and work with AI?

Cruikshank: I don't know if it takes that much creativity. I think it takes a little bit of practice. One of the nice things is the AIs actually provide a little bit of the creativity. You can ask the AI, what should I be doing with AI right now? It will give you a decent answer. I think it's really more a matter of getting experience with it, building some comfort with it, and putting in a little effort to figure out how you can automate things, how to get used to it. One of the slides I showed was that it's not just the low-skill, low-education jobs that we have to worry about now. Everyone is talking right now about how paralegals are going to go away. It's like, these AIs know law back and forth; that's mostly what a paralegal does, so there's no reason to have them around. I heard that one day, and the very next day I was talking to a paralegal and found out that 95% of a paralegal's job is making copies. That's something that ChatGPT is never going to do. There are tedious things you do in your job that maybe the AI can't replace, and there are other jobs where people have to worry a little bit more, and those are actually maybe the higher-skill jobs. Really, right now, being ready for it is just about understanding how much of your job the AI can do right now. If it's a lot, maybe you should think about another job. If it's not, then maybe you're safe for a while.

Participant 6: You mentioned a couple of times that people don't want to talk to AI. Do you think that will change over time as people start to use things like ChatGPT? Because that will change the mindset as well.

Cruikshank: Yes, absolutely. I think it's going to change. It's going to become normalized, and people are going to get used to it. You're right, lots of people are using ChatGPT, and it behaves like an individual, and people are not getting creeped out by that. Those are early adopters, though. Early adopters are generally just more open to everything. When you're talking about your support customers, they're the people who really are unlikely to be happy when they find out they're talking to an AI. I do think it's going to be fine in the end. You can also look at texting. We've been able to do voice texting for a long time, and you look around and see a lot of people doing that. I'm just not one of them. A lot of people still want to type it, maybe because they don't want people to hear them, or whatever. That's the "future is unevenly distributed" kind of thing. Some people are going to be very used to it, and some people are just never going to get used to the idea of talking to an AI.

 


 

Recorded at:

Aug 02, 2024
