In this podcast, Shane Hastie, Lead Editor for Culture & Methods, spoke to Alex Cruikshank, Director of Software Engineering at West Monroe.
Key Takeaways
- Generative AI will change the way we do things, but the fear of job loss is exaggerated.
- Developers should gain experience with AI and learn how to effectively use it in their projects.
- Front-end development will be most affected by AI, as it changes the way humans communicate with computers.
- Designers need to understand AI and its potential impact on user interfaces and conversational design.
- AI enables user interface designers to consider hybrid interfaces that combine AI and traditional controls.
Transcript
Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. Today I'm sitting down with Alex Cruikshank. Alex, well, I tell you what, I'll let Alex introduce himself. Alex, welcome. Thanks for taking the time to talk to us.
Introductions [00:34]
Alex Cruikshank: Yes, thanks, Shane. I'm Alex Cruikshank. I am a director of software engineering at West Monroe. I have been doing software engineering professionally for at least 25 years. I don't know. It goes back further than that depending on how you define it. I've worked most of the time in consulting, so I've worked with over 50 companies doing various types of projects. That's been software, small startups to large enterprise companies doing architecture everywhere in between. So I've been working with a lot of different technologies. And over the last year, I've been working almost exclusively with generative AI and figuring out how we can integrate that into projects.
Shane Hastie: And that's what's brought us here today. You did a talk at QCon San Francisco last year, which was really well received. Can you give us the very short overview of that? In the show notes, we'll link to the talk itself so people can watch that, but what were the highlights?
Organizational resilience and generative AI [01:30]
Alex Cruikshank: I talked about organizational resilience and generative AI, so mostly my talk was about how generative AI is going to change things. I don't think anyone's surprised that there are going to be some changes with generative AI. It has certainly changed what we've talked about a lot over the last year, but I wanted to talk about it in terms of a technological revolution and how those typically play out, how it typically takes time for new technologies to actually get into the workforce and change the way we do things. But there's a lot of fear that comes along with that, that people are going to lose their jobs.
There's also sometimes a desire for that to happen more rapidly, because it's like, if we could just use AI to replace all these people today, we'd save a lot of money. But the reality is that there shouldn't be quite as much fear, because those things take a lot of time. And the main thing is that the people who are there are almost always delivering a lot more value than it appears. They're doing a lot more than it says in their job description. The innate ability of humans to interact with each other, figure out problems, and make things happen makes it really, really hard to automate and replace people with any kind of technology. And generative AI is no different, even if it is a little bit closer to the way humans think. That's the gist of it.
Shane Hastie: So given that summary, as a developer, where should I be looking to build my career path?
Where should a developer look to build their career path? [02:59]
Alex Cruikshank: Yes, that's a really interesting question right now. In the same way, "as an entrepreneur, what should I do with a startup?" is a really interesting question because generative AI is out there, and every single day there's a new opportunity, and then that opportunity gets shut down the next week because some other company's already there, or things like that.
The thing is, I firmly believe software engineering is going to be extremely important over the next several decades at least. I don't think that's going away. I do think it's going to change. The interesting thing is I think it's going to affect front-end engineering the most. And the reason I think that is because, when you get down to it, AI is really a different way for humans to communicate with computers. It's a much more effective way for humans to communicate with computers because it basically speaks our language, or rather it is speaking our language. And that's a lot of what programmers have been doing over the years: translating human thought, intent, what the desire is, into a form a computer can understand, very structured data that can be put into a database or acted on. And now we have a way to go from an unstructured "I just want to do this" to structure using generative AI.
And so I think that's really going to change the way we do interfaces, and maybe not even that far out. I mean, we've already been playing around with these hybrid interfaces where you have a big wizard or just a bunch of filters on a screen or something like that. It ends up being a complicated interface. And over the last, well, 20 years with web interfaces, we've been learning how to do this well and coming up with good patterns for it. But the thing is, we put a lot of effort into building those complicated interfaces, and they can be buggy. With AI, maybe we don't need them quite so much. And so that's one of the reasons why I think front-end development is going to change.
But there are still going to need to be people who use AI to implement those interfaces. People are going to have to wire the two things together and still understand how to get the AI to talk to the computer, what the computer needs to do, and what the solution actually is under the hood. So that's why I think there's always going to be that need, but I do think we're going to see a transition over the next, really, 10 to 20 years. I don't think it's going to happen immediately.
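To make the "unstructured intent to structure" idea concrete, here is a minimal sketch, not from the conversation, assuming the OpenAI Node SDK and a hypothetical product-search filter shape, of how a natural-language request might be turned into structured data the rest of an application can act on:

```typescript
import OpenAI from "openai";

// Hypothetical structured shape the rest of the app understands.
interface ProductFilter {
  category?: string;
  maxPrice?: number;
  sortBy?: "price" | "rating";
}

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Turn a free-form request into a ProductFilter the UI or back end can act on.
async function parseFilter(userRequest: string): Promise<ProductFilter> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumed model choice
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          "Extract a product filter from the user's request. " +
          'Respond with JSON like {"category": string, "maxPrice": number, "sortBy": "price"|"rating"}. ' +
          "Omit any field the user did not mention.",
      },
      { role: "user", content: userRequest },
    ],
  });

  // The model returns JSON text; parse it into the structured filter.
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}

// Example: "show me laptops under $800, cheapest first"
// might become { category: "laptops", maxPrice: 800, sortBy: "price" }.
```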
Shane Hastie: So good news from the point of view that you're not going to be out of work tomorrow? And good news in that a lot of the underlying engineering skills are still going to be needed, plus learning to talk to the AI?
Underlying software engineering skills will remain necessary [05:39]
Alex Cruikshank: This is something I can't say enough: yes, people should be working with AI as developers, using the API. OpenAI's is the easiest right now to use and get used to, how you send messages to it and get messages back. It's just an API, and it's actually really easy to get started with. But when you start playing with it more and trying to get more out of it, getting consistency out of the AI is the real trick: figuring out how to prompt it in the right way, how to effectively get communication back, and how to architect those systems so that you're not just getting communication to and from the AI, you're actually effectively structuring a process. And putting in the guardrails, doing the intent gating, all of the other steps you need to do and understanding what those are, those are all things that I think people really should be getting experience with.
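For readers who want to try this, a minimal sketch of that send-a-message, get-a-message-back loop with a crude intent gate in front of it might look like the following. This is an illustration, not West Monroe's tooling; the model name, the support-domain framing, and the yes/no gating rule are assumptions.

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// A crude "intent gate": classify the request before acting on it.
// Real systems would use a richer taxonomy and log or route rejected requests.
async function isInScope(userMessage: string): Promise<boolean> {
  const gate = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumed model choice
    messages: [
      {
        role: "system",
        content:
          "Answer with exactly 'yes' or 'no': is this request about our product support domain?",
      },
      { role: "user", content: userMessage },
    ],
  });
  return gate.choices[0].message.content?.trim().toLowerCase().startsWith("yes") ?? false;
}

async function answer(userMessage: string): Promise<string> {
  if (!(await isInScope(userMessage))) {
    // Guardrail response when the request falls outside the intended scope.
    return "Sorry, I can only help with product support questions.";
  }
  const reply = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: "You are a concise, friendly support assistant." },
      { role: "user", content: userMessage },
    ],
  });
  return reply.choices[0].message.content ?? "";
}
```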
So at my company, West Monroe, we have a tool which started out as just a ChatGPT clone that we could use internally. It was using Azure, so it was a little bit better from a legal standpoint, and people were just chatting with it, but it gave us a base we could start building upon and building new tools on top of AI. And then we started cycling as many people through as we could: if people were not actively on a project, we'd bring them in and have them start working on this tool and building out capabilities with it so that they could get a little familiarity with what's going on. And now those people are all on AI projects, because sales of AI work have gone up quite a bit and we need people who are familiar with the technology to go on those projects, which has been really rewarding to see.
Shane Hastie: Consistently, the team has been the unit of value delivery in software engineering. It's never the individual. How does the AI as a team player play out?
AI is a tool, not a member of the team [07:31]
Alex Cruikshank: I guess the thing is, I don't see the AI as ever being part of the team. That may just be because I haven't really thought about it enough, and it's definitely something to think about, but to me, the AI is still part of the machinery, really. It's still the thing that the team is acting on. I say that, but then with the coding tools, it's different. I mean, I use GitHub Copilot and it's accurately named. It does feel like a copilot. It's like sitting there coding with me. Sometimes I will write out a comment and then just take a ten-second break while it completes my thought for me, which is just great. And it almost always, well, a lot of the time, gets it right; it may not always get it right all the way, but it really helps out a lot. So I think there is that: you can have the AI work with you as a developer. And I still feel like it's an augmentation and not just something that sits next to a programmer.
One of the things I've done is a little bit of a study within our group of the people who are using this: how effective they were when they were using it, and what effect it had on their programming performance. Everyone said that it improved their productivity, and what they said was it was something like 30%. And then I asked the question again: that's your programming productivity, how much has it improved your productivity overall? And they all dropped it down to 10%, because we don't spend 100% of our time programming. We spend most of our time talking to people, understanding requirements and writing stories, developing what we need to do, having architectural conversations with people, and the Copilot's not in any of that.
Some of that might get a little help from AI too, so we might see it creeping into some other areas. Maybe we get a little closer to that 20 or 30% performance improvement overall, but for right now, it's much lower. So it's part of the team, but it's a minor part of the team right now, I would say.
Shane Hastie: You said it gets it right most of the time. How do you know when it gets it wrong?
Recognizing when the AI tool gets things wrong [09:33]
Alex Cruikshank: I've been programming for 25 years. It's like, did it write it the way I wanted it to write it or did it not? But the thing is, yes, it's often easier to look at wrong code, understand it's wrong, and fix it than it is to write it from scratch. So it's still quite helpful even in those circumstances.
Shane Hastie: And what if I was a novice? How would I know?
Alex Cruikshank: This is another really good question. For me, this is the million-dollar question. AI definitely helps novice programmers. They can be more productive. It is doing a lot of the work for them. And the question is, is it too much of a crutch? Are they learning? The thing is, they're novice programmers; they should never be able to submit code without some kind of code review. So there's always someone more senior who's going to be monitoring it. That means if AI is making mistakes and they can't catch them, then hopefully someone else is. But they need to get to the point where they're that senior developer eventually. And the question is, is that senior developer giving the feedback to them, or is it going to the AI at that point? Which, of course, it's going to ignore. So are they learning at the same rate?
I think it's a completely open question. It could be no, and it's a bad habit, or it could be that they're actually learning faster. It could be a great way to learn because they're watching the AI generate the code, they're seeing how it gets done, and usually done in a good way, so it may just be boosting their performance as learners. I guess I'm hopeful that that's the case, and we'll see.
Shane Hastie: What about designing software products? Not using AI to help design and code and build the products, but products with AI built in? So now we're building a product, pick a domain, you've got a CRM. What do I add?
Designers need to get AI literacy [11:22]
Alex Cruikshank: Yes, it's another one of my favorite topics. I think all designers, probably even more than developers, need to get AI literacy. They need to get used to what it means to have AI in the picture. Like I said earlier, I think AI is going to have a big impact on the way we think about user interfaces. But then there's the step beyond that. A lot of things can just be a chatbot. I'm not totally sure everything's going to be a chatbot, but things can be a chatbot, or you can have a more natural conversation with them. And there becomes this notion of conversational design. You're not just going to let the AI out of the box have OpenAI's brand, essentially. You want it to have your voice. You want it to create the experience for the users that your interface has created in the past. So I think designers need to start thinking a little bit in terms of conversation, in terms of creating these experiences.
And I think we all need to be thinking about intermediates. Going all the way to chat for everything, people aren't going to want to do that, and it's not the right interface for everything. Then again, some things are simple: if all I have to do is click the button, I'd rather just click the button than say, "click the button now." But there are interfaces in between that can be much more complicated. And this is where we've done some experiments around hybrid interfaces, where we have a common model of what the interface is representing. That model both controls and is controlled by the user interface, but is also fed into the prompts for a text input.
And then you can have a conversation and the AI will present you with a new model, a new version of it or part of it, and you put that back into the model. The two can play with each other, so that if you change something over here, the AI knows about it, and if you change something through the conversation, the UI updates. I have a lot of hope for these types of interfaces, at least as a bridge between where we were and where we're going to be.
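A rough illustration of that hybrid pattern might look like the sketch below. This is not the actual West Monroe experiment; the trip-search domain, function names, and model choice are assumptions. The idea is a single shared state object that traditional controls read and write directly, and that is also serialized into the prompt so the conversation can propose updates to it.

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Shared model: the single source of truth for the interface.
// Ordinary controls read/write it directly; the chat side proposes changes to it.
interface TripSearch {
  destination?: string;
  nights?: number;
  budget?: number;
}

let sharedModel: TripSearch = {};

// Called by ordinary UI controls (dropdowns, sliders, etc.).
function updateFromControls(patch: Partial<TripSearch>): void {
  sharedModel = { ...sharedModel, ...patch };
}

// Called when the user types into the conversational input.
// The current model is fed into the prompt; the AI returns a new version of it.
async function updateFromConversation(userMessage: string): Promise<void> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumed model choice
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          "You maintain a trip-search form. Current state: " +
          JSON.stringify(sharedModel) +
          ". Apply the user's request and return the full updated state as JSON " +
          "with keys destination, nights, budget.",
      },
      { role: "user", content: userMessage },
    ],
  });
  sharedModel = JSON.parse(completion.choices[0].message.content ?? "{}");
  // The UI then re-renders its controls from sharedModel, so a change made in
  // conversation shows up in the widgets, and vice versa.
}
```

The key design choice is that neither side owns the state: the conversational input and the traditional controls both operate on the same model, which is what keeps them in sync.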
Shane Hastie: So from that architecture and design perspective, again, there's the need to really understand, I want to say, the potential of these new tools, which is no different from any other tool we've come across over the years.
Alex Cruikshank: No, absolutely not. It provides new opportunities, and I think with that there are going to be new expectations. So you just have to understand both of those, stay on top of it, and make sure that you're delivering the state of the art. But the nice thing about UIs is we all have to use them, so we're all constantly exposed to new ideas, and it's not that hard to stay up to date with that.
Shane Hastie: This is the Engineering Culture podcast. What are the cultural impacts for teams of bringing in generative AI at both sides of this?
Culture impacts [14:16]
Alex Cruikshank: Well, initially, one of the things I'm seeing a little bit of, and am a little bit worried about, is that it's potentially going to create a separation between the AI haves and have-nots. There are people working on systems that aren't using AI, and then there are people who are revamping systems or building new systems that are using it. And so that's something to watch out for. And that's what I was saying: it's great to get people cycled through, so everyone has those capabilities, and it's not like you have your old-fashioned developers and your new-fashioned developers.
But as far as other concerns around culture, I don't know. Building teams and making teams run well is an art in itself. I don't see how that's going to change. I mean, I think we're still going to need to have effective engineering cultures no matter what we're doing. So maybe you can imagine ways that AI would be able to help inform that process, or maybe we'd have new tools to help us communicate better, but I haven't seen those come out yet.
Exploring developer productivity [15:21]
One of the things I am interested in is that there's been a lot of talk, work, and interest lately in metrics for development. And the nice thing that's come along with that is the understanding that it's just a hard thing to do. DORA metrics work well, but they work in an operations environment where you have production code. And then scale metrics work pretty well, but they measure teams. When you're looking at a large subset of a developer population, they tend to work well, but if you try to take them down to an individual level, they don't.
And I'm not sure the goal should be measuring individual performance, because what we really care about is the team. But I do think AI potentially has the ability to help with metrics and to make better, more nuanced metrics on team performance. That's probably more traditional AI, not generative AI, although the thing I've been interested in is that there's a lot of unstructured data in the development process, like your stories and your code. You can analyze the code with generative AI. All those things you could potentially bring into the metrics process and get a better idea of the actual complexity of a story, or even identify what was blocking this story or that story, and some of what was going on behind the scenes, to get a better sense of what was happening.
Shane Hastie: A lot of our audience are relatively new leaders, engineers promoted into a team leader position. They're a fair part of our target audience. You've got 25 years of doing this stuff. In general, what advice would you give them?
Advice for new leaders [17:01]
Alex Cruikshank: I think the first thing is you need to look at all your team members as contributors in one way or another. They're not all necessarily the biggest contributors in terms of code, but that doesn't mean they're not big contributors to the team. And so you need to watch out for that, because some people just make teams better, and sometimes they do that in addition to cranking out a lot of code and sometimes they don't. But that's making the team perform better. So yes, just always look out for the team dynamics, basically how people are getting along, to make sure that that is optimal. And then you can start worrying about how the team is performing overall and whether there's some issue keeping it from achieving its goals.
Shane Hastie: Well, Alex, lots of good points and interesting observations there. If people want to continue the conversation, where do they find you?
Alex Cruikshank: The best place is on LinkedIn. It's acruikshank on LinkedIn, and I work at West Monroe. That's probably it these days. I'm pretty much off of social media.
Shane Hastie: Thank you so much for taking the time to talk to us.
Alex Cruikshank: Yes, Shane, it's been great talking to you.