Developer Upskilling and Generative AI with Hywel Carver and Suhail Patel

In this episode, Nsikan Essien talks with Hywel Carver and Suhail Patel about developer upskilling and generative AI. Together they try to describe the software engineer's learning journey, the ways current generative AI technologies could help or hinder it, and what the role of the software engineer becomes with powerful AI technologies.

Key Takeaways

  • Upskilling is about moving from the acquisition of knowledge, to the development of skill and ultimately to the nuanced judgement that comes from skill application in multiple contexts.
  • Current generative AI technologies can help with knowledge acquisition and provide a fast ramp to some level of skill acquisition. However, they are unable to help with the complexity of very open-ended problems, which are typically the most valuable section of the learning journey.
  • AI can enable a future where the abstractions of software engineers move from programming languages to natural languages.
  • Abstractions are powerful aids to solving problems, but for engineers, there will always be a need to go beyond them to solve very complex problems. This means effective use of AI in the future will still require strong foundational knowledge.

Transcript

Intro

Hello, it's Daniel Bryant here. Before we start today's podcast, I wanted to tell you about QCon London 2024, our flagship conference that takes place in the heart of London next April 8th to 10th. Learn about senior practitioners' experiences and explore their points of view on emerging trends and best practices across topics like software architectures, generative AI, platform engineering, observability, and the secure software supply chain. Discover what your peers have learned, explore the techniques they're using, and learn about the pitfalls to avoid. I'll be there hosting the platform engineering track. Learn more at qconlondon.com. I hope to see you there.

Hello and welcome to The InfoQ Podcast. I'm Nsikan Essien, and today I'll be joined by Hywel Carver and Suhail Patel as we discuss generative AI and its impact on the growth of software engineers. Hywel is the CEO of Skiller Whale, an organization that provides in-depth technical coaching for software engineering teams, and Suhail is a senior staff engineer at Monzo Bank, where he spearheads projects focused on developer platforms across Monzo. Welcome both to the show. It's an absolute pleasure to have you.

Hywel Carver: It's a real pleasure to be here.

Suhail Patel: Yes, absolute pleasure. Thank you so much for having us.

Early career learning journeys [01:17]

Nsikan Essien: Fantastic. So generative AI is a very hot topic today, and I thought for this conversation it would be really good to hear some of your perspectives on how we think it affects software engineering today, and more specifically software engineers as they grow and advance in their roles. We've all seen the headlines about GitHub Copilot and ChatGPT and how they can aid in a range of tasks, from code completion all the way through to larger-scale app development. But prior to getting onto the meaty subject matter, I thought it would be great to hear each of your reflections on the start of your career, and what the growth process was like there in particular. So fresh out of an institution of some sort, perhaps university or otherwise, it's the first engineering job. What was the learning process like then? Suhail, it would be great if we could start with you.

Suhail Patel: Yes, absolutely. So for me, I got into the world of engineering mostly through tinkering. I used to run quite a few online communities, and you'd typically pick up forum software written in PHP and things like that. My first interaction was with web software, so the full-on LAMP stack, and a piece of software like that, typically, at the time you were deploying it, would be deemed quite insecure, so you'd have to go in and make a bunch of tweaks to secure it. And then you'd want to add functionality for your specific guild or forum group: image generation and things like that. A lot of that was trial and error, working with a community of like-minded individuals to achieve a specific end goal. So programming wasn't the thing that you wanted to achieve. It was programming to achieve a goal, like something visual, something that you could see, something that you could experience, a button that you could click, an avatar that you could generate.

So I was exposed to lots of different things: working with HTML and CSS and JavaScript, but also image generation, and then you learn about caching and downtime and reliability and running servers and Bash and Linux, the whole world of software and databases, and the entire life cycle of software development, by accident. And what I found is that I really enjoyed that particular process, just learning how technology worked behind the scenes. I do have a computer science degree, but I think what that has provided is a broader range of topics to look into, things like artificial intelligence and neural nets and graphics processing; it's not the only foundation into the world of engineering.

And yes, for me, I am a tinkerer at heart. I use programming as a tool to solve real-life problems, and that has carried me through the last decade of engineering as a professional engineer. Right now I'm focused on the world of backend software engineering at a company called Monzo, which Nsikan just mentioned. It is a bank here in the UK, we have over eight and a half million customers, and on our side we effectively run our developer platform. Our goal is that engineers can think about building really, really great products for customers and not have to worry about the complexities of running systems.

So imagine you could effectively fling your code over the fence and it would run, assuming that it fits a certain type of code. That's the kind of platform that we aim to enable. And what we eventually want to achieve is that your speed of thought is the thing that is slowing you down. If you can write the code quickly, then you can get it into production quickly, is the way we like to think about it. The constraint is the speed of your thought.

Nsikan Essien: Fantastic, brilliant introduction there. Before we move on to Hywel, there are a few things I'd love to hear a little more about. You talked about trial and error a lot, and in the field we know that's a primary feedback mechanism. So what was looking for help like then? Was it people in person, online? What did that external feedback, outside of your own head, look like?

Suhail Patel: To be honest, I don't think it's been static over the last decade. Initially I found digesting books to be quite helpful. I actually have a PHP 5 book; I have an entire bookshelf of O'Reilly and other associated publishers' books, but I find that medium doesn't resonate as well for me anymore. Nowadays I like to get hands-on. One of my favorite sets of websites, for example, to learn a new programming language is the "by Example" sites, like Go by Example and Swift by Example, where small snippets of code come with a bunch of explanations and you get to run through the code effectively line by line; you sort of learn by doing. And right now I see stuff like Replit and YouTube resonating extremely well too. There's a lot of noise in those ecosystems, but there are a lot of really high-value signals in those sessions as well.

And then even things like InfoQ and QCon: going to those events opens your eyes to new technologies and different perspectives. There you get access to a community of people, people that you can reach out to. What I've found throughout my career is that everyone is extremely happy to have a chat. I message people on LinkedIn saying I'd love to spend 30 minutes, and if they're, for example, an expert in machine learning or LLMs or AI: how do I get started? What mistakes can I avoid? How can I lean on your expertise? And it's a two-way conversation; I love having those chats with individuals who are also getting started in the field. What I've found is that I have never, ever gotten a no to having those discussions. People in the software community are so willing to give up their time, have those discussions, and bring inspiration.

Nsikan Essien: Fantastic. No, thanks a lot for that. Hywel, if we could head over to you for a moment. If you could think back a little, to the early days starting out; you chuckled when Suhail mentioned being a tinkerer. What was that learning and growth process at the very beginning? What was that like for you?

Hywel Carver: I think Suhail and I have that in common. I'm definitely a maker and I learn best by making. You can't see this off camera, but over this way there's a pile of maybe 20 little stripboards where I've been building an eight-bit computer for the last few years, which will eventually be able to play Pong as I've designed it, but we'll see if it ever achieves that. The main thing is I now know how von Neumann architecture works, and that's super exciting for me. I also knit, so I've knitted a jumper, and I've started doing carpentry from time to time and I'm slowly getting better at that. Code for me started when I was nine. I knew that video games were written in this language called C, and so I wanted to learn C because I figured if I could learn C, then I could remake FIFA or something. It turns out to be a little bit harder than that.

Nsikan Essien: Very ambitious goals there.

Hywel Carver: You've got to start somewhere, haven't you? So I went to the local discount bookshop in the nearest city and got a copy of Sams Teach Yourself C in 21 Days for six pounds, and I definitely spent more than six days, more than 21 days, sorry, reading it. But for me the eye-opening moment was being able to see how you could translate "I want the computer to do this" into writing this code. And then when I was 13 or 14 I built my first website. I was a big fan of the Simpsons at that age, and so I made a website which had a page for every one of my favorite characters. My school had an intranet. I feel like I might be the oldest one on this podcast, so that was a very exciting thing at the time. I still remember when our school got email accounts; they weren't there when I started school but they were there when I left. And so I hosted my Simpsons site on the intranet.

But I had this navigation that was the same on every page, and there was no easy way to reuse the code because the school would only let me host static HTML, and I think it was that that meant I ended up getting into PHP, which was probably my first server-side language. I was very lucky as well to have my older brother Tom, who was probably more into websites than I was and spent a lot of time looking at how things were built on the internet, just right-click and view source, or whatever the equivalent was back then in a pre-Chrome era, to see how the thing was built, and then making pages with the most gaudy and awful marquee tags that you can imagine. Shout out to anyone else who used to write marquees.

Yes, then I ended up doing a degree in information engineering, which really was always about solving problems with code. That was actually super fun, because there was a bit of data structures and algorithms, but it was much more focused on machine learning and pattern processing. Because it was a very intersectional engineering degree, we would solve mechanical engineering problems with code; I remember at some point there was a simulator of aerodynamic flow over wing surfaces that worked in code. So for me the learning process has always involved lots of doing. I have a shelf full of books, and there are plenty of things that I've learned from books. What I find is that there's an amount of learning that happens when you're reading the book or watching the video or whatever, and then when you come to put it into practice, there are all these, maybe not gotchas, but details, difficulties that aren't always highlighted in the books.

I think the books give you kind of introductory content, and then when you try and solve real problems with it, that's when you're like, actually, now I realize the things I missed or didn't understand or that the book kind of glossed over. So yes, learning by doing, ideally with other humans. Similar to what you touched on, there's a model for learning called ICAP, which says that we get the best learning results by doing with human interactions: interactive learning gets better results than constructive, which gets better results than active learning, which gets better results than passive learning. Does that answer your question, Nsikan?

The move from forums as learning spaces [10:52]

Nsikan Essien: Fantastically so, and there are so many threads in there. One that I'll start with, which you both mentioned, was learning with humans in the loop, be that via conferences or reaching out directly to members of the community; it seems like that's played a pivotal role in this whole process. And arguably the biggest forum, if you will, or collection of folks that most people in the profession would have used is Stack Overflow. You have a question: has somebody else struggled with this like me too? I think you'd speak to a lot of people who would say they've spent a good amount of time on Stack Overflow. In fact, there are so many memes about Stack Overflow, like the modern engineer's laptop or keyboard only really needing a Command-C and a Command-V. That's how important it is to the community. And so I guess my next question is, in a world where generative AI technologies are available over chat, is that necessarily the death of the forum? Does the forum become less important? There's lots of commentary on that. It would be great to hear your takes.

Suhail Patel: That's a really interesting question. Will it be the death of the humans in the loop, or the forum? One perspective on it is that with most problems there isn't a right or wrong solution. Okay, maybe there's a solution that is suboptimal, but there isn't always a direct right solution for every problem, especially when you get into the world of complex problems. There is something that you want to achieve, but there may be a time trade-off or a complexity trade-off, or maybe using more resources gets the work done but, at the end of the day, it was easier to implement or easier to understand in a higher-level language rather than writing C or assembly. The advantage of having a human in the loop, whether it be via a forum or within a company or an organization or what have you, is that it brings out that perspective. You get other humans giving their views, their history, their judgement, their perspective, from all the context that they have gleaned in the past.

And what I find really fascinating is that that context doesn't need to be a function of tenure. It doesn't matter how many years you've been in engineering. I have met phenomenal engineers who have come straight out of university or from a boot camp. This is their first foray into the world of engineering, or professional software engineering specifically, and they come up with really interesting perspectives from their prior lives. For example, one of my colleagues used to be a maths teacher and brings really, really interesting perspectives from the world of teaching on how things are explained and how things are formatted, even tone of voice, like variable names and things like that: how you abstract away code and how easy things are to understand.

We talk a lot within our profession about technical debt and maintenance, and this all plays into that. And you see this on Stack Overflow, even when humans are involved, before even bringing in the world of AI: how you frame your question really changes the narrative on what kind of answer you're going to get. For simple questions like "what function would I use to achieve this particular outcome?", usually you can converge on an answer relatively quickly, because that is a very focused question. But once you get into any sort of realm of complexity, you can see there are lots of varied answers, and typically people look at the accepted answer and then just move on, because it has achieved what they want to achieve. What I find really fascinating is to look at all of the alternative answers as well, just to understand how we converged on the accepted answer, but then what other ways were available to achieve what I want to achieve.

Nsikan Essien: No, that's a really good point there. Hywel, thoughts?

Hywel Carver: That's a super interesting question. When you asked it, my first thought was, gosh, when did I last use Stack Overflow to answer a question? And it was actually a really long time ago, because now when I need to look things up, it's more often how does this language solve that problem, or what methods in this framework exist for that? I know the kind of thing I need to do, and I might be able to write code that feels unidiomatic or weird. And so then I'd be like, well, how would a real Go developer write this? What's Go's real approach to sorting in place? The method I've got returns a new copy, but that feels really inefficient, throwing away the old one.

What is the thing that lets me do that in that language? And I think Suhail's right that when I used to use it more, there were the kind of closed questions that I would go to Stack Overflow for, like what is the way to do this thing in this context? And then the open ones, I think Stack Overflow actually tries not to solve anyway. I know this because it inspired the podcast I host: you can close a question because it's considered primarily opinion-based on Stack Overflow, and the aside here is that I like questions that are primarily context-based, and that became the name of the podcast I host, I hope.

Nsikan Essien: You do host.

Hywel Carver: I do host, indeed. So yes, in terms of how do I find the single right way to do this, I think Stack Overflow probably doesn't need to have that function anymore in a world where you've got things that can write the code for you. That said, current generations of AI can spout utter nonsense. I know the preferred term is hallucinating, but I don't think that is a fair use of the word, and so we still have a need for more reliable sources. But I have been going more to the official Go docs for quite a while now rather than looking on Stack Overflow, and that's partly just because I think search for very specific niche terms has got kind of bad over the last decade.

I think search has generally got more interpretative, so that people who ask questions in a vague way get better results, but people who ask for specific terms of art are more likely to see something that wasn't quite what they were after. All of which is a rambling way of saying I haven't been using Stack Overflow for a while, so I'm probably not the target audience. But I think the way I used to use it has probably been replaced a bit by artificial intelligences, and those more open-ended questions I think it was always kind of bad at, and I would rather discuss those with a human developer who I know and respect.
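
As an aside on Hywel's Go example: a minimal sketch, assuming nothing beyond the standard library's sort package, of the in-place sorting he describes. sort.Ints sorts the slice it is given rather than returning a new copy (since Go 1.21, slices.Sort does the same for any ordered element type).

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	// sort.Ints rearranges the slice in place: no second copy of the
	// data is allocated and then thrown away.
	xs := []int{3, 1, 2}
	sort.Ints(xs)
	fmt.Println(xs) // [1 2 3]
}
```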

The learning journey: from knowledge to skills to wisdom [16:53]

Nsikan Essien: Okay, so that's a really useful framing. So more focused specific things might well be in the realm of the public forum, but then when you get to questions, Suhail, you had described those as things of substantial complexity, that's where you generally want more of a two-way conversation. And to touch on what you mentioned there, Hywel, there's an element of trust or almost a reputational aspect perhaps that's embedded. Is that a fair enough distillation?

Hywel Carver: Yes, so one of the distinctions I make in areas of learning is between knowledge, skills, and wisdom. Knowledge is like, which API calls exist? I have an array, what can I do with that array? What does each of the methods return? Skill is the ability to actually do something. So writing Go is a skill, solving this problem in Go is a skill. I don't know why I'm picking on Go so much; other languages are available. Wisdom is that contextual decision-making. It's being aware of the company, the people around you, the people who are reporting to you, up to the problem that we're solving, how quickly we need to ship, the way this is going to be used.

Are we ever going to call this thing with a million items in the array, or is it only ever going to be 10, in which case I need to care less about a perfect solution to iterating through it five times? All of those things, and the decision-making that comes with them, are wisdom. I think that is something that Stack Overflow has always kind of steered away from in the past, and I think that's the kind of complexity, Suhail, you were talking about: taking all of those things together with your experience in that, the wisdom I would call it, and making a sensible decision given the context.

Suhail Patel: I think, Hywel, you've hit the nail on the head there. You look at a lot of the tools that are coming out now, even framing it around AI: they're basically knowledge databases. They are not there to replace wisdom. You need to ask the right question to get the right answer, and you've got to frame the question in a very specific way. So a lot of it would be augmenting, for example, documentation. Or you look at IntelliSense: it has gotten smarter as a result of AI, but it is not going to write all of your code for you in the context that you want it written.

Nsikan Essien: There goes my wishful thinking.

Limitations and strengths of current large language models in the learning journey [19:04]

Hywel Carver: I think there is still so much need for context in making these decisions. We've got to the point where if you say to an AI, I need a module which has this class and the class should have these methods, it can do a good first draft of that for you. It might not be perfect, but it can write idiomatic code in whatever language you like, I'm not going to pick on Go this time, and it will be reasonably close to correct. But right now we still need a smart human who can say, these are the modules I need, these are the classes I need, this is what the interface should look like, unless you're doing something that's incredibly repeatable and boilerplate. If you need a list implementation or a stack implementation, I'm sure you can rely on an artificial intelligence to produce the entire thing. One of my favorite examples of artificial intelligence's limitations, and sorry if this is a tangent.

Nsikan Essien: No, no, no. Tangents are good.

Hywel Carver: But I think it is so, so cool. So, background to this: I think everyone on this call is very aware that modern LLMs are essentially excellent pattern-spotting machines. They are great pattern processors. As you said, they're a knowledge database; I think they're also coupled with a skill, and that skill is being very good at spotting patterns. If you ask modern LLMs this question, they will get the answer wrong: Alice has three brothers. Each of Alice's brothers has two sisters. How many sisters does Alice have? And almost every human who hears that will make assumptions about the shape of the family and the way things work and the relationships between brothers and sisters, and would say, oh well, Alice almost certainly has one sister. Each of the brothers has two sisters, one of those is Alice, therefore there's one sister left.

And every LLM that was tested in the version of this I saw online says six, because most questions that look like that are of the form: Alice has three dogs, every dog has two bowls, how many dog bowls are in Alice's house? And the answer to that is six. And this is the difference between pattern processing with text, and a thoughtful model of the world and understanding of it.

Nsikan Essien: And I think that's actually a really perfect example of a reproduction, if you will. It's able to pastiche and transpose a similar pattern that's been spotted before, adapting the current context to it, and that's where it falls over. And where I think this is particularly interesting is in that moving up the tree of skill that you described earlier, going from knowledge to skill to wisdom. When you're in that early phase, when you are lower down on that tree and you're gathering knowledge, in a world where there's a lot of generative tooling, how do we make it so that, for software engineer growth, you don't end up picking up those bad apples, if you will, that are coming from this knowledge tree?

It would be great to have your thoughts on that, because one of the articles I saw, a research paper actually, was talking about how there are a number of vulnerabilities in code that's been reproduced by Copilot. Now, you could have a conversation about how the source text likely had those vulnerabilities in the first place, so it's no worse. But then again, if that was the tool you were using to try to get up the knowledge tree, you are reproducing the problem. So, curious to hear thoughts: is generative tooling a good way to move up that knowledge tree? Does generative tooling only make sense when you're further up the knowledge tree and you can tell that something is a less good solution? Curious to hear thoughts. Suhail?

Suhail Patel: I think there are two ways of looking at it. I'll give you the first perspective; I was thinking about this analogy before I joined this call. When I used to go to school, my teachers, especially my mathematics teachers, always used to tell me that you'll not have a calculator in your pocket all of the time, and that was the excuse to learn your multiplication tables and fractions and things like that. And whilst that is now fundamentally untrue, with smartphones and everything in your pocket, and some people do carry calculators themselves, being able to rattle off numbers off the top of your head, or gauging order of complexity, and I guess what we call here in the UK decision mathematics, still helps a lot when you're having day-to-day conversations. I'm not going to pull out the calculator on my phone and do a sum.

We're going to gauge the order of complexity as we're having a natural conversation, and that's the nuance that was missed. Now, the reason I'm mentioning all of this is that, for example, I see a lot of engineers who are starting out and want to be involved in the world of software engineering who are using these tools because they want to spend more time up the knowledge tree, but they're not spending time learning the foundations. I spoke a little earlier about the "by Example" websites that are quite common and popular. I would argue that a lot of that is gathering knowledge: knowledge about the syntax, the functions that are available, how to do basic operations, for loops, creating lists, stacks, queues, how to read files and things like that. Stuff that is pretty easy to digest once you know what to do. It is very much a syntax-gathering exercise, but there is no replacement for actually going in and gathering that knowledge.
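
For readers who haven't seen the "by Example" format Suhail mentions, a minimal sketch in its spirit, covering a loop, a slice used as a stack, and reading a file, annotated line by line; the filename notes.txt is hypothetical.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Go's one loop keyword, for, covers the classic counted loop.
	for i := 0; i < 3; i++ {
		fmt.Println("iteration", i)
	}

	// A slice doubles as a stack: push with append, pop from the end.
	stack := []string{"a"}
	stack = append(stack, "b")   // push
	top := stack[len(stack)-1]   // peek
	stack = stack[:len(stack)-1] // pop
	fmt.Println("popped:", top, "left:", stack)

	// Reading a whole file into memory.
	data, err := os.ReadFile("notes.txt")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println(string(data))
}
```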

Now, where generative AI can help is if you get stuck. If you get stuck, it is a fantastic resource to be that explainer. So for example, let's say that you have come across a complex bit of code on Stack Overflow, or you're looking at a library to achieve something; let's say that you're a machine learning engineer and you're deep in the guts of PyTorch or TensorFlow and you want to understand how something is working. Generative AI is fantastic for that. You can go in on Copilot, and I've actually been using it myself. I can go and read this code, I understand the syntax, I understand the language, but the amount of time it saves in understanding the code, and the explanation that it gives, it's almost like going to another human and getting a fresh perspective, and all it's doing is reading through the code line by line like I would.

For example, a mutex is taken here, this file is read over here, it's then processed over here, and then the mutex is unlocked over here; this is the critical section. It's very, very good at explaining what you would do as a human being, like a fresh pair of eyes, a rubber duck, as they call it. So that is one angle to this conversation: people who want to upskill and get into the world of engineering. I don't think this is a replacement for that foundational knowledge. But there is another group of people being empowered by the world of generative AI, where it's like, I have a thing that I want to build and I don't care about the software that's used to power it, because I want to create a store or shop front or something. I want to be able to sell my stuff online, or I want to be able to tabulate this data.
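
To make the mutex walkthrough Suhail describes concrete, a minimal Go sketch, with comments marking the steps an explainer would call out; the file path is hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"strings"
	"sync"
)

var (
	mu    sync.Mutex
	lines []string
)

// loadFile mirrors the narrated flow: the mutex is taken, the file is
// read and processed inside the critical section, and the mutex is
// released when the function returns.
func loadFile(path string) error {
	mu.Lock()         // the mutex is taken here
	defer mu.Unlock() // ...and released here, ending the critical section

	data, err := os.ReadFile(path) // the file is read here
	if err != nil {
		return err
	}
	lines = strings.Split(string(data), "\n") // it's processed here
	return nil
}

func main() {
	if err := loadFile("config.txt"); err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Println(len(lines), "lines loaded")
}
```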

There's actually a really fantastic online resource called Automate the Boring Stuff with Python, which is an absolutely fantastic book, and the reason I really like it is because it does teach you Python fundamentals and things like that. But what it does is take real-world examples, like doing your calculations, automating spreadsheets, doing your taxes, getting something to move about, or renaming photo files: stripping out all the IMG_2964 and putting in rich metadata, like this photo was taken in Greece in October of 2023, extracting out metadata. And folks want to do that as a utility. They're not interested in the programming; they want to use code as a utility. And again, generative AI can help you with the blank canvas problem. Otherwise, in order to get started, we are really saying, okay, you need to become an engineer, read the entire manual, learn every function, learn all of these things.

I think that is unreasonable, and I think that is almost a barrier to entry for a lot of these individuals who could be really empowered, and generative AI can help with that blank sheet problem. It can give you a few bits and then you can iterate from there. So I think there are two groups of people. One is the people who want to go into the world of engineering, where this is a tool that can help rubber duck; I don't think it's a replacement for learning that foundation. It doesn't automatically alleviate you, it doesn't raise the floor automatically; it helps you, it's a rubber duck. And there is another group which is being empowered to do tasks where, for example, they would otherwise have to go out to an engineer, or hire an engineer, for something that is quite menial.
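
A minimal sketch of the photo-renaming chore Suhail mentions. Real photo metadata lives in EXIF tags, which would need a third-party library, so this sketch substitutes the file's modification time for the date; the photos directory is hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "photos" // hypothetical directory of IMG_xxxx.jpg files

	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		// Only touch files with the camera's IMG_ naming scheme.
		if e.IsDir() || !strings.HasPrefix(e.Name(), "IMG_") {
			continue
		}
		info, err := e.Info()
		if err != nil {
			continue
		}
		// e.g. IMG_2964.jpg -> 2023-10-14_IMG_2964.jpg
		newName := info.ModTime().Format("2006-01-02") + "_" + e.Name()
		if err := os.Rename(filepath.Join(dir, e.Name()), filepath.Join(dir, newName)); err != nil {
			fmt.Println("rename failed:", err)
		}
	}
}
```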

Nsikan Essien: That's really useful context. So it's a good way of moving up the tree by having a thing to bounce ideas off, or to help you debug your own thoughts, for want of a better term. Hywel, your thoughts on that?

Hywel Carver: I have so many thoughts, and I agree with so much of what Suhail said. It's something I've also thought about a great deal in the past and given talks about, so I'm struggling to put my thoughts in coherent order, because I genuinely care so much about this question. I think firstly we have to distinguish between knowledge and understanding, as in the model of Bloom's taxonomy of cognitive learning outcomes. Knowledge is the kind of baseline; it's recall and remembering. Understanding is the layer above that, and there are four other layers over that. And you reminded me that Yann LeCun, one of the godfathers of AI, tweeted the other day, "Knowledge is not understanding." You also reminded me of the Einstein quote, "Any fool can know, the point is to understand." And I think you are absolutely right, that knowing is something LLMs can help you with; understanding, they might be able to help you with.

To build on Suhail's example, they can say the mutex is taken here and released there, but they can't say the mutex needs to be released on that line rather than the line earlier because this line needs to be in the critical section, or whatever. I think we care a bit about knowledge. There is a baseline of knowledge that is absolutely critical: if you are having to look at the docs every time you access the zeroth element in an array, you're really going to struggle to write code at any serious speed. But if you need to look at the docs to remember how to set an HTTP header on a request, that feels more understandable; unless you're doing that all the time, unless you're writing a web server, it feels understandable that you have to look at the docs for that and not know it.

But in general, I think of memory, the human brain, as sort of just a local cache for Google when it comes to knowledge in software programming. We care more about skill, the ability to translate problems from the real world into lines of code that will solve those problems, which relies on knowledge but requires more than knowledge alone. In terms of whether AI can replace all of that, I don't think it can replace the understanding part, at least not with the current generation of LLMs that we have.

In terms of whether it can help with learning, I've seen people create products for themselves using LLMs like ChatGPT. They write a really thin interface over it where they can say, I want to learn this thing, and it will give them a summary that will let them dig into areas of that. It can explain things; it can potentially even ask questions of them. In terms of the ability for doing, it's something we've experimented with a lot internally, and it just isn't there. It doesn't cut the mustard when it comes to learning a skill. And in fact, while you were talking, I asked ChatGPT the question: when should I learn from ChatGPT and when should I learn from a human coach?

If people want to recreate this, this was GPT-3.5 Turbo, because I didn't know how long I'd have to wait. It's a long, long answer, but it basically says you should learn from ChatGPT for general knowledge, quick answers, self-paced learning, a cost-effective way to learn, or initial research. You should learn from a human coach for skill development. I did not prompt this; it's always pleasing when anyone agrees with you, even if it's a machine. Skill development when you need to acquire practical skills, blah, blah, blah. Personalized guidance, accountability, complex or nuanced topics, emotional support, I hadn't even thought of that one, and interaction and feedback as well. I think there's a future where AI can be a meaningful learning assistant for skills; I don't think it's here yet.

Barriers to effective generative AI use in technology organisations [30:55]

Nsikan Essien: And I feel like that touches on, Suhail, an idea you floated early on in your introduction about some of the work you do at Monzo, where you're really trying to enable your engineers to develop at the speed of thought. So it sounds like using generative AI as some sort of extension for expressing those thoughts, in a way that some other computer is able to interpret and execute, feels like the sort of acceleration that you're really trying to build for your team. Is that a fair approximation?

Suhail Patel: Yes, absolutely. I speak to a lot of companies that have skipped a whole bunch of steps and gone straight into the hotness of generative AI, but I think there are more foundational steps that are proven in the industry as well. For example, what might be considered quite boring: automation, documentation, fostering understanding, learning from incidents. These are things that are tangible and have worked for decades for many, many institutions. Just giving access to GPT for your company isn't going to allow you to leapfrog. It is a tool in your arsenal, one of many tools, but as we've been talking about, it is not going to be a replacement for all of these other activities. You need to invest your time and energy in those activities as well.

What I find quite telling, actually, as Hywel was reading out that explanation: a lot has been said around generating code, but it's interesting that we've not seen a lot around reviewing code. We have a lot of stuff, for example, that is quite foundational, around static analysis and rules that are written; Dependabot is a good example, checking for security vulnerabilities and patterns and things like that, but those are quite coarse-grained. I think Hywel put it absolutely perfectly: LLMs are really, really good at pattern matching, and that is the way that we've got them. But as software engineers, a core part of our day-to-day work is peer review, reviewing code and making sure that we're not accruing technical debt. It's not gone into any of that realm of complexity.

The best it can do is probably the same baseline that you can get from a tool like Semgrep or any other static analysis tool that can scan your code base for specific patterns, and that is the best that it can do. It can't go further than that: will this be good for the organisation? Does it scale to this number of elements? It just doesn't have that information, and it will not be able to hallucinate that at all.

Nsikan Essien: That's a really useful distinction about the current limitations of the tool: it's not able to help in that space of reviewing. I guess a hunch, Hywel, I'd like to hear about: do you think future versions of these tools might venture in that direction of empowering reviews, for example?

Hywel Carver: They might well try. I mentioned Bloom's taxonomy before as a sort of measure of outcomes of learning, which starts with knowing and understanding. The levels above that are applying, so actually being able to use a skill; analyzing, being able to break information apart in order to decide how to use a skill; and then, depending on which version you look at, evaluating and creating at the top. So create might be using a skill together with lots of other skills, and evaluate would be essentially reviewing. For software developers, it's looking at the way someone solves something and deciding if it's been done well, thinking about how it's been performed, and producing some kind of value judgment on it or giving feedback. And for human learning, that is a higher form of skill. It's really unclear to me how an AI could achieve that without real understanding of what's happening.

Again, if you're doing something kind of very repeatable or kind of knotty programming: if you have a stack implementation and you're like, my stack implementation is broken, there are enough stack implementations out there that I could imagine an LLM will say, your problem is on line seven, where you are not reducing the pointer in the array, or whatever it is. But it seems really hard for me to imagine that it could ever meaningfully review without understanding the full context of what's being worked on. And even then, it needs the context not just of the code base, but of the company and the collaborators, and the approach to security and scalability that a bank has to use, which, I'm assuming, Suhail, looks pretty different to the approach to scalability and security for a script you write for yourself to run in the background once a week on your own home computer.

I think that is a really interesting question: if there's a future where developers aren't actually writing the code, where we're still making the structural decisions, working out what the classes and the interfaces should be, but we leave the writing of the code to LLMs, how do we effectively review it? How do we get good enough to go past apply on Bloom's taxonomy, all the way up to evaluate? So we don't need to write code anymore, but we do need to be competent enough to review it. And the best answer I have is structured learning.

Again, it's one of the reasons why Skiller Whale, my company, exists: because we believe a lot in structured learning as a way of preparing for the future of AI, as well as just generally keeping up with the mad pace of technological change. I think we need ways of doing and understanding, and of having people solve those problems, that are efficient and effective, so that when they see some code, they don't need to have been writing code all the time in order to think, oh, there's a potential unescaped-SQL vulnerability here that we need to be really aware of.
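
For concreteness, a minimal Go sketch of the unescaped-SQL vulnerability Hywel alludes to, contrasting string concatenation with a driver-escaped placeholder; the table and column names are hypothetical, and wiring up an actual database driver is omitted.

```go
package main

import (
	"database/sql"
	"fmt"
)

// findUserID contrasts the two forms. The vulnerable query splices user
// input straight into the SQL string; the safe one hands the value to
// the driver via a placeholder. (The ? placeholder style is
// driver-specific; Postgres drivers use $1.)
func findUserID(db *sql.DB, name string) (int, error) {
	// Vulnerable: name = "x' OR '1'='1" changes the query's meaning.
	vulnerable := "SELECT id FROM users WHERE name = '" + name + "'"
	_ = vulnerable // shown only for contrast; never build queries this way

	// Safe: the parameter travels out-of-band and is escaped by the driver.
	var id int
	err := db.QueryRow("SELECT id FROM users WHERE name = ?", name).Scan(&id)
	return id, err
}

func main() {
	fmt.Println("see findUserID for the vulnerable vs. parameterized forms")
}
```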

Generative AI as a means of abstraction for developers

Nsikan Essien: I find that's a really, really interesting segue into what was going to be my next question, which is: if, as a consequence of these sorts of technologies, we all get to go up a level in terms of the abstraction that we work at, does that now mean that there's some stuff that's just not worth knowing anymore? Open question. So for example, if you have a generative AI thing that's been trained on every stack implementation under the sun, is there still a need for you to know how to write your own stack implementation? It's a general question around how much abstraction is too much, if there is even such a thing. Because if you've never learned how to write your own stack implementation, can you review a stack implementation that's been written? Should you care, assuming it's a "standard" one, in air quotes? Curious to hear thoughts on that. Suhail?
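
For reference while reading the answers, a minimal slice-backed stack in Go, the kind of "standard" implementation in question:

```go
package main

import "fmt"

// Stack is a minimal LIFO container backed by a slice.
type Stack[T any] struct {
	items []T
}

// Push adds a value to the top of the stack.
func (s *Stack[T]) Push(v T) {
	s.items = append(s.items, v)
}

// Pop removes and returns the top value; ok is false when the stack is
// empty. Forgetting to shrink the slice here is exactly the kind of
// pointer-bookkeeping bug discussed above.
func (s *Stack[T]) Pop() (v T, ok bool) {
	if len(s.items) == 0 {
		return v, false
	}
	v = s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v, true
}

func main() {
	var s Stack[int]
	s.Push(1)
	s.Push(2)
	top, _ := s.Pop()
	fmt.Println(top) // 2
}
```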

Suhail Patel: I think this has been a perennial question in the industry; I don't think LLMs have changed the perspective on it. Previously it used to be about hardware: if you're not close to the hardware, then how could you know how it performs? If you don't know assembly, does that make you a competent engineer? I'm going to let you folks in on a little secret. I've been working in the world of platforms for probably 15 years, and I'm still terrible at writing Bash. I can't write it to save my life. I know what correct Bash looks like; I'm just terrible at writing it. When do you need a "then"? When do you need a "fi"? When do you need a "done"?

Nsikan Essien: When do you need a semicolon?

Suhail Patel: Do you need a semicolon? Yes, exactly. Does that make me a bad engineer? For me, I have accepted as my fate that Bash is not a critical skill that I need in order to be effective. So I think inevitably there will be things that we will not have to worry about if we come to a world where we are not writing this code on a daily basis. Very similar to Hywel, I don't think on a daily basis about assembly, or about what's going on in the compiler I feed my Go to, and I don't think about what's going on behind the scenes on a daily basis. But if I'm going to debug a deep technical implementation or a race condition or what have you, those skills do then come into play, and those skills might be rusty, but I know at least where to start.

I have tools in my arsenal. I can pull out ftrace; I can pull out deep debugging tools. And I think that's what makes day-to-day work really, really effective: knowing that these tools exist, and, since the tool landscape is always changing, knowing what tools are going to be effective when. You don't need to be an absolute expert in these tools right from the get-go. But inevitably there will come a day when you need to go in and debug a complex problem. Maybe it's a performance issue, or a race condition, or a vulnerability, or something like that in a running system, and we have to do this even in regulated industries. It's really, really important to piece together... There was a really fantastic tweet that I used as part of one of my talks.

For example, we have a microservices architecture, and with a microservices architecture what you have is a bit of a murder mystery every time an issue happens, and you need to stitch together the different components of that murder mystery. Typically what you find is, I wish I had a log line here, or I wish I had a metric here, because it'd give me the definitive answer. So knowing which tools are available, how you can use them, and how you can apply them, that bit will not go away. But we can abstract ourselves away from the day-to-day as we get to higher levels of abstraction. I think we've been doing that as an industry for many, many decades.

Nsikan Essien: And so in this case it sounds like what we're saying is that regular human language, if you will, becomes the language of abstraction, rather than perhaps the programming languages we've been used to. Perhaps that is the next extension.

Suhail Patel: Yes, absolutely. For example, if you drive a car or you cycle, you don't learn about every mechanical component in its individuality; you sort of trust that those things have been thought about, and you are building and working at a higher level of abstraction when you use those tools. And I think the same analogy applies, and the same analogy has applied for many decades.

Nsikan Essien: That's really, really fantastic. And I guess, Hywel, you wanted to chip in there a little bit?

Hywel Carver: I agree, because of what I think is sometimes called Spolsky's law: that all non-trivial abstractions are, to some degree, leaky. And so as a software developer, I have sometimes needed to go deeper down the stack of abstractions, beneath the abstraction that I'm working on. The reason I started building the computer I talked about before is because Stripe used to host a capture-the-flag thing where you learned assembly while solving capture-the-flag problems. And I did that and I was like, I get assembly, but I still don't really understand this concept of microinstructions; I would like to understand that. And to some degree, being able to go down the stack and understand in terms of hardware what is going on is sometimes useful. You can write in your high-level language, but knowing how memory is going to be allocated and de-allocated, and knowing what is going on in hardware, sometimes turns out to be important.

And in fact, I went to a talk by a guy called Evan who works at OpenAI, which talked about their performance with GPT-4 and the next generation of NVIDIA graphics cards. And this talk went right across abstractions, from here is the P99 of response times from our GPT API interface, right down to here is how we think about marshalling kilobytes of data into the memory on the GPU as effectively as possible. Because to do really good engineering, I think you have to, you have to. As Suhail said, you have to be able to sometimes duck down a level, or even two levels, beneath the level that you're working on. All of which is to say, if spoken natural languages become the language of programming, we're still going to sometimes have to know what's going on under the hood. To go back to your car analogy as well, Suhail.

Accelerating software development to the speed of thought with AI [42:04]

Nsikan Essien: Fantastic, thank you both. That's been really, really insightful. So in a nutshell, where have we arrived on the initial question of generative AI and software engineering growth? It sounds like we've converged on a place where we're saying that moving up the tree of skill, if you will, from knowledge to skills to wisdom, is still very much possible with generative AI technologies; they don't take that part away. They might allow us to care less about certain aspects of knowledge, because ready answers are available, but the real complexity is in understanding the context, which at this moment is what makes the role so complex and unique. And still being able to dip into the lower levels, beneath the abstraction behind these technologies, is the ace in the hand that you might need from time to time, essentially.

Hywel Carver: Which sounds a lot more fun to me. The boring bit of software is always the boilerplate, and the sort of filling in the gaps once you've decided what the gaps are is kind of dull. But I think it can mean that what's left feels like all of the good bits of software, and, as Suhail was saying, the Monzo team aims for that: being able to write code at the speed of thought. Maybe now we're able to write code almost faster than the speed we could think it, and we can just think about what code should exist, provided that we can get LLMs that write really good code for us with fewer hiccups. I collect examples of LLMs doing daft things, and someone in my team found one where a comment beginning "The American Constitution is..." got auto-completed via Copilot, so GPT in the background, to "The American Constitution is not a valid JSON document."

I thought it was just... Did not see that one coming. So I think there's a future where LLMs get better. Well, I think LLMs definitely will get better and better, and there might come a future where we can trust them to do a good first job of implementation. And then coding becomes just, well, I assume we'll have a nicer interface by then, and we just say: there should be five classes, the five classes should interact in this way, and they're in a module called this, go. And then you just read it through and you go, great, gh pr create, set some reviewers, and move on.

Nsikan Essien: Fantastic. That's amazing. Thank you very, very much. Hywel, Suhail, it's been an absolute pleasure to have you on the podcast to discuss this really interesting topic. Look forward to our next conversation.

Hywel Carver: Thanks so much.

Suhail Patel: Thank you so much for hosting us.

Hywel Carver: Yes, it's been a real pleasure.

Suhail Patel: Absolutely. Thank you.
