Justin Sheehy on Being a Responsible Developer in the Age of AI Hype

At the recent InfoQ Dev Summit Boston, Justin Sheehy of Akamai delivered an insightful opening keynote on being a responsible developer in the age of artificial intelligence hype. The talk was aimed at software practitioners who might be feeling overwhelmed by the rapid developments and inflated expectations surrounding AI. We’re sharing Justin’s full talk in this special episode of the InfoQ Podcast. We hope you enjoy it.

You can watch the video presentation along with slides on the dedicated InfoQ page.

Key Takeaways

  • AI is Code, Not Magic: Large Language Models (LLMs) are sophisticated autocomplete programs that lack true understanding.
  • Beware of the AI Hype: The current AI landscape is filled with exaggerated claims, often promoting AGI as imminent, which is misleading.
  • Think Critically About AI Claims: Developers should approach AI claims with skepticism, demanding evidence and avoiding buzzwords.
  • Accountability in AI Development: Prioritize accountability when incorporating AI into software, including bias testing and ensuring real value.
  • Ethics in AI is Key: Responsible AI development involves adhering to legal and ethical guidelines, avoiding harm, and considering societal impact.

Transcript

Justin Sheehy: Before I can talk to you about being a responsible developer in the age of AI hype, I want to remind you, just briefly, about something you already know. You are developers or software practitioners of some kind. That's what I mean, in an expansive sense, when I say that. I can speak with you here because I'm a developer. I've written compilers, databases, web servers, kernel modules. I'm a developer too. I'm here with you. But it isn't only developers we need to hear from; it's linguists, philosophers, psychologists, anthropologists, artists, ethicists.

I'm not any of those things, but I want us together to learn from them. This is because one of the biggest failings that those of us in software tend to have is thinking that we don't need all those other people. That we'll just solve it our way. This is almost always a bad idea.

Another thing about being a developer, and again, I'm going to use that term very loosely, if you're here today, you're who I mean. Another thing about being a developer that people sometimes forget is that you have power. My friend and one of the best tech industry analysts out there, Steve O'Grady, wrote this about 10 years ago, and it hasn't become less true.

Your decisions matter. You have power. You know what Uncle Ben told Peter Parker comes with great power: responsibility. We need to know how to make good, responsible decisions, specifically the decisions that are in front of us today because of some recent trends in AI. AI has made huge strides lately, some massively impressive work. Maybe the only thing more massively impressive has been the scale of the hype around it. I don't think it's out of line to call this moment an age of AI hype. Some of you might disagree with me, and think it's all earned, and we're on our way to the singularity or something. We'll come back to all that hype.

How Does AI Work?

Before we do that, we need to figure out what we mean by the AI part, because that can be fuzzy. Artificial intelligence is an unhelpful term because it is so broad, but I'm going to use it anyway because most of you will. Don't forget that this is a computer program. You're developers. You know how this works. Don't let yourself get tricked into thinking there's something magical about a program. If these things are programs, what kind of programs are they? I want to credit Julia Ferraioli for reminding me of this very basic breakdown. Most of how AI has been built over the years has been in one of two categories. Either it's all about logic and symbol processing and so on; when I took AI classes in grad school about 30 years ago, this is where most of the focus was.

Or it's about statistics and mapping probability distributions of things seen in the past, into the future. All the recent attention has been on systems based in that probabilistic side. I'll focus on the part people are excited about now, LLMs, and their image synthesis cousins. I said these are just programs and you can understand them. I'm not going to do a thorough explanation of how they work. I want to take just a moment, just enough to have a concrete conversation.

One of the many advances that led to the current generation of auto-regressive LLMs, or AR-LLMs, is the concept of the transformer, which allows the attention, which you can think of as the statistical relationships between a word or a token and the other words around it, to be much broader. This newer way of enabling richer dependencies has increased the quality of the output of these systems and allowed their construction to be much more parallelized.
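To make that idea of attention a bit more concrete, here is a toy sketch in Python with NumPy. It is only an illustration of the core mechanism (a real transformer adds learned query/key/value projections, multiple heads, positional information, and much more); the names and shapes here are made up for the example.

```python
import numpy as np

def toy_self_attention(X):
    """Simplified single-head self-attention: each token's new vector is a
    weighted blend of every token's vector, weighted by how strongly the
    tokens relate to each other (the attention weights)."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                   # pairwise token-to-token affinities
    scores -= scores.max(axis=-1, keepdims=True)    # stabilize the softmax numerically
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the whole context
    return weights @ X                              # blend token vectors by those weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 16))                   # 5 tokens, 16-dimensional embeddings
print(toy_self_attention(tokens).shape)             # (5, 16): every token "sees" all 5
```

The point is simply that every token can draw on every other token in its context, which is what lets these models capture much broader statistical relationships than their predecessors, and do it in parallel.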

Even with that great advance, which really is great, and a lot of other recent advances, these language models are just more efficient, parallel, scalable versions of the same thing that came right before them. This is from Google's intro to LLMs developer pitch, "A language model aims to predict and generate plausible language." It predicts plausible next words, like autocomplete. Really, that is it. That's the whole thing. Don't take my word for it. You already saw it from Google's page. This is the definition of GPT-4 from OpenAI's own technical report.

Given a document, it predicts what word or token is most likely to come next, based on the data it was trained on. You keep repeating, you get more words: autocomplete. It is really good autocomplete. When you realize that that's exactly everything that an AR-LLM is doing, you realize some other things. It doesn't plan ahead. It doesn't really know anything. It is possible to make systems that plan or have knowledge, but it is not possible to make these systems do that, because they literally can't do anything else. That means they can't be told to not give false answers. Both Google and OpenAI say this. This isn't my opinion.
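To see how little machinery that loop needs, here is a deliberately tiny sketch in Python. A made-up bigram table stands in for a neural network with billions of parameters, but the outer loop, sample a plausible next word, append it, repeat, is the same shape as what an AR-LLM does.

```python
import random

# A made-up bigram "language model": given the last word, a probability
# distribution over plausible next words. Real LLMs condition on far more
# context and have billions of parameters, but the generation loop is the same.
NEXT_WORD = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt, max_tokens=5):
    words = prompt.split()
    for _ in range(max_tokens):
        dist = NEXT_WORD.get(words[-1])
        if dist is None:
            break
        choices, probs = zip(*dist.items())
        words.append(random.choices(choices, weights=probs)[0])  # sample a plausible next word
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down" -- plausible, never checked for truth
```

Nothing in that loop knows or checks whether the output is true; it only knows what tends to come next.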

These are very cool pieces of software, using impressive architectures to do an extremely capable version of autocomplete. Nothing in there is about knowledge, meaning, or understanding, and certainly not consciousness; it's just, what's a plausible next word. OpenAI, in response to being sued in the EU for saying false things about people via its systems, has shrugged and repeated that. They've been very clear in their legal replies that there is only one thing that their system does: predicting the next most likely words that might appear in response to each prompt.

They say that asking it to tell the truth is an area of active research. We know that's code for we have no idea how to do this. I'd like to set an expectation today, those kinds of systems, the AR-LLMs, like ChatGPT and the others, are the type of AI that I'll be referring to most of the time in this talk. It is possible that other AI will be created someday, for which some of this talk won't apply perfectly. It's not possible that these systems will just somehow magically change what they can do, which is part of what we're going to talk about.

The Age of AI Hype

It is worth emphasizing, though, how powerful these are at the things they're good at. I don't think I have to try very hard to convince you that these LLMs are amazingly cool and capable. That's why that part gets none of my time today. If they're so awesome, why would I call this an age of hype, instead of just an age of awesome AI? People are saying some pretty hyped-up things. I should give some credit to Melanie Mitchell for pulling some of these quotes together in a talk of hers that was really great, and that I really appreciated.

Those aren't things people are saying today. We've been here before, about 60 years ago. Leading scientists and journalists were just as convinced of those things then, as now. What's the difference between then and now? Mostly money. There are billions of dollars riding on bets like these. That doesn't make them more true, it just adds more incentive for the hype. We do see the same kinds of statements from very prominent people now. Please don't fall for them because these people are well known. This is all complete nonsense. I will come back to a couple of the examples. I want to help you not fall for nonsense, so that you can evaluate these kinds of really cool technology more reasonably, more usefully, and to make better decisions. Because making good decisions is fundamentally what being responsible is about. Decisions about what technology to use, and how to build whatever each of us builds next. To do that, we need to not be fooled by hype and by nonsense.

What does the hype really look like? How might someone be fooled? A big chunk of the current hype is the many claims that the type of LLMs we're now seeing, ChatGPT, PaLM, Llama, Claude, and so on, are on a straightforward path to a general artificial intelligence, which roughly is the kind of AI that's in most science fiction: human-like, or real, intelligence. This is complete nonsense. Here's a paper from Microsoft Research that tries to make that case. The paper suggests that LLMs like GPT-4 have sparks of general intelligence.

One of the most exciting things about this paper for me was this section on theory of mind. This is a big deal. If language models can really develop a sense of the beliefs of others, then something amazing has happened, and the authors did something better than many out there, which was to look to the right discipline for answers, not just make it up themselves. They used a test from psychology about belief, knowledge, and intention, and GPT passed it. This is impressive. Except, it turned out that if you give the test just slightly differently, GPT fails it. That's not at all how it works with humans. The LLM did what LLMs are very good at: providing text that is very convincingly like the text you might expect next. This fooled the original authors. People who want to be fooled often are. This is great if you go to a magic show, less great if you're trying to do science.

This article made an even more dramatic claim. No sparks here, no, it has arrived. This is an amazing claim. Amazing claims require amazing evidence. What was the evidence? There wasn't any. The article just claims it's here, and places all of the burden of proof on anyone wishing to deny that claim. Read it yourself if you don't believe me, but really, not the slightest shred of evidence was provided for this massive claim. That's not how science or even reasonable discussion works. Basically, they said, we can't define it or explain it or prove it, but we insist that it exists. If you disagree, you must have a devotion to human exceptionalism. No, I have a great deal of humility about what humans are capable of, including when it comes to creating intelligence.

I want to note that the very last paragraph of that article makes a great point, and one I agree with. There are more interesting and important questions we could ask, such as, who benefits from, and who's harmed by the things we build? How can we impact the answers to those questions? I deeply disagree with their core claim, but I completely agree with this direction of inquiry. Another statement I've heard a few times. I've heard some people not quite as starry eyed as the people making some of those other dramatic claims, but still a bit credulous, say things more like this. "The idea with this argument is that since the current AR-LLMs seem a lot more intelligent than the things we had before them, we just need to give them a lot more information, a lot more computing power, and they'll keep going, and they'll get there."

There's a bit of a problem with that argument. At its heart is a mistaken understanding of what LLMs are doing. They're not systems that think like a person. They're systems designed to synthesize text that looks like the text they were trained on. That's it. That is literally the whole thing. More data and more compute might get them closer to the top of that tree, but not ever to the moon. Another common claim, which is part of the hype, is that since LLMs produce responses that seem like they could be from a person, they pass a fundamental test for intelligence.

This is a misunderstanding of what the Turing test was in the first place. Alan Turing, who we all here owe a great debt to, actually called this test, The Imitation Game. Imitating human text is a very different thing than being generally intelligent. In fact, in the paper where he laid out that test, he even said that the idea of trying to answer a question about machines being intelligent was nonsense, so he decided to do something else. This is a wonderful paper, and more self-reflective than most current commentators. If we all had the humility of Alan Turing, the world would be a better place, and we'd be fooled less often.

One claim that I've heard many times and I find very confusing, is that people just do the same thing that LLMs like ChatGPT do, just probabilistically string along words. People making this claim are a lot like LLMs themselves, really good at sounding plausible, but when you look closer, there's no actual knowledge or understanding there at all. Stochastic parrot comes from the title of this very important paper from 2021. This was about possible risks in LLMs. That is a sensible topic for ethicists to write about. It's also the paper that got Google's ethical AI leaders fired for writing it. That's why one of the authors is listed as Shmargaret Shmitchell, since she wasn't allowed to put her name on it.

The notion that Bender and Gebru and McMillan-Major and Mitchell were talking about is a description of AR-LLMs. They're probabilistic repeating machines, much like a parrot that learns how to make the sounds of humans nearby but has no idea what they mean. Sam Altman, the head of OpenAI, here makes the claim that this is all we are too. This is something many people have said, and it is a bizarre claim.

The problem at the heart of the claim is a fundamental misunderstanding of the fact that language is a tool used by humans, with its form being used to communicate meaning. You or I, when we have an idea, or a belief, or some knowledge, make use of language in order to communicate that meaning. LLMs like ChatGPT have no ideas or beliefs or knowledge. They just synthesize text without any intended meaning. This is completely unlike what you or I do. Whether it is like what Sam Altman does, I don't know.

Instead of listening to him, let's come back to Emily Bender for a moment. She was one of the authors of that stochastic parrot paper. She is a computational linguist at the University of Washington. She's extremely qualified to talk about computing and language. She and I have a similar mission right now. I want to help you be less credulous of nonsense claims, so that you can make more responsible choices.

Someone might see statements like this one, about current LLMs like ChatGPT, or Gemini, and so on, and dismiss them as coming from a skeptic like Professor Bender. To that, I'd first say that that person is showing their own bias, since she's so enormously qualified that the word expert would make more sense than skeptic. Also, she didn't say this.

This is a quote from Yann LeCun, the head of AI research at Meta, and a Turing Award winner, and an insider to the development of LLMs, if anyone is. He knows just as Professor Bender does, that language alone does not contain all of what is relevant or needed for human-like intelligence. That something trained only on form cannot somehow develop a sense of meaning. There is no face here, but you saw one. That effect is called face pareidolia, and most humans do it, and even read emotions that clearly do not really exist, into such images.

The face is a creation of your own mind. Similarly, when you read a bunch of text that is structurally very similar to what a person might write, it's very challenging not to believe that there's some intention, some meaning behind that text. It's so challenging, that even people who know how these systems work, and know that they have no intentions, no meaning, can be fooled. It is challenging, but you can rise to that challenge. Because, remember, these are just computer programs. There is no magic here. You know this.

There are some things that have made it easier to get this wrong. One of these things is the term hallucination. The use of that word about LLMs is a nasty trick played on all of us. When a person hallucinates, we mean that their sense of truth and meaning has become disconnected from their observed reality. AR-LLMs have no sense of truth and meaning, and no observed reality. They're always just doing the exact same thing, statistically predicting the next word. They're very good at this. There's no meaning, no intention, no sense of what's true or not. Depending on how you look at it, they're either always hallucinating, or they never are. Either way, using the word hallucination for the times we notice this, is just another case of the meaning all being on the observer's side.

Even if we accept the use of the word hallucination here, since it has entered common use, we shouldn't be tricked into thinking that it's just a thing that will go away as these systems mature. Producing text that is ungrounded in any external reality is simply what they do. It's not a bug in the program. I want to be very clear when I say that this behavior is not a bug. LLMs are never wrong, really. They can only be wrong if you think the thing they're trying to do is correctly answer your question. What they're actually doing is to produce text that is likely, based on the data before it, to look like it could come next. That's just not the same thing. They're doing their job and they're doing it very well.

Another source of confusion to be aware of, in addition to the hallucination misnomer, is this concept that arbitrary behavior can just emerge from an LLM. This idea is encouraged by all that sci-fi talk about AGI. It's really fun, but isn't connected to how these things actually work. Remember, they are just programs. Cool programs, but programs, not magic, and not evolving, just engineering. Then, we hear stories like this, about how Google's LLM learned Bangla, the Bengali language, without ever being trained on it. That's the kind of story that could make someone become a believer that these programs are something more. It turns out the important part of the headline was the phrase "doesn't understand", since it only took a day for someone who cared to look. Remember Margaret Mitchell, fired from Google for publishing about AI ethics?

It took someone like that only a day to find that the language in question simply was in the training set. Huge claims should come with huge, clear evidence. This is key to how you cannot be fooled. When seeing a big claim, don't just dismiss it. But don't simply swallow it either, unless you get to see the evidence. Maybe, in the case of Google's CEO believing nonsense, like the statement that their LLM can just learn languages it hasn't seen, we can just chalk it up to him not yet knowing as much as you do, and not requiring evidence before making such an outlandish claim. Then a few months later, Google released a really cool video on YouTube; it got over 3 million views right away. It showed amazing new behaviors, like identifying images in real time in spoken conversation with a user. It turned out that that was fake. Video editing isn't all that new.

Gemini is a really cool LLM. It can do some neat tricks. Let's not be tricked. Sometimes people get honestly fooled, and sometimes they're trying to fool you. You don't have to know which is which, you just have to look for proof in a form that can be verified by someone who doesn't have a stake in you being fooled.

People and AI (Human Behavior)

Switching briefly away from the narrow focus on LLMs out into the wider topic of people and AI. Some of you might have heard of the Mechanical Turk. I don't mean Amazon Mechanical Turk, I mean the machine that Amazon's service is named after. This was a fantastic chess-playing machine, one of the earliest popular examples of AI. It became widely known in the 1770s. This machine played against Benjamin Franklin and Napoleon. It won most of the games it played, even against skilled human players. That's amazing history. We've had AI good enough to beat human players at chess for over 200 years. How do you think it worked? There was a human chess player inside. It's a great trick. It's great for a magic show.

This is actually a cool piece of AI history, because this is actually how a lot of today's AI works. We see an AI system with an outrageous claim about its human-like capability. For example, here's a press release, with nothing relevant left out, from about 3 years ago. Through a groundbreaking set of developments, a combination of computer vision, sensor fusion, and deep learning, Amazon was able to make it so that you could just take what you wanted off the shelves and leave the store, and it would then charge your Amazon account the right amount. I gave this away by showing you the Mechanical Turk first. Up until very recently, Amazon executives continued to call this technology magic in public, and their site now says that it's all about generative AI and all of these amazing developments.

We just saw a much older magic show and they had the same trick. Another great example of how AI has moved forward is autonomous cars. Cruise owned by GM is just one of many companies racing to be ahead in this market. They're racing so fast that they've run over some toddlers and run into some working fire trucks. Anyway, this page on their website is pretty cool, and it's still there. You can see it. It leaves out some important things that surfaced around the time the California DMV suspended them for a lack of safety and for misrepresentation. Now you know what's coming.

Just like the Mechanical Turk and the AI checkout lanes, the driverless cars actually had an average of more than one driver per car. Some of them are just too ridiculous, like the Tesla bot. It was literally just a guy in a robot suit. These are not only cases of humans inside the AI, they're also hype. They're outrageous claims with the purpose of selling things to people who are fooled. They're also all good reminders that if something acts amazingly well, like a human, and you can't get adversarial proof of how it works, maybe don't start out by believing it isn't a human.

Even for things where that isn't literally the case, where there's no individual man behind the curtain responding to you, there are human elements of the AI systems, and now we are back to the LLMs and image generators that we all get excited about today. For instance, one of the important breakthroughs that enabled ChatGPT to exist is called Reinforcement Learning from Human Feedback, or RLHF. This is all really well explained in a lot of places, including on OpenAI's page.

People read a lot of answers to a lot of initial prompts that people write, in order to produce the original training set, and then more people interact with that to build a reward model. It's all about building a system that uses a lot of text written by people to statistically produce more text that people think looks like what a person would write. It is a real breakthrough to make it work. So it's not just a program. It's a program, or a bunch of programs, plus an enormous amount of low-paid human labor. Just as the Amazon checkout lanes required multiple people per camera, or the Cruise cars had one and a half drivers per car, or the Tesla bot had an infinite ratio of humans to robots.
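As a rough sketch of just one piece of that pipeline, the reward model, here is a toy pairwise-preference trainer in Python. Everything in it, the feature vectors, the raters' taste, the linear scorer, is invented for illustration; the only point is that the "reward" is learned entirely from human comparisons of one answer against another.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each candidate answer is summarized by 4 numbers, and that human
# raters tend to prefer answers with more of feature 0 (a made-up preference).
def fake_comparison():
    rejected = rng.normal(size=4)
    chosen = rejected + np.array([1.0, 0.0, 0.0, 0.0]) + 0.1 * rng.normal(size=4)
    return chosen, rejected

# Reward model: a single linear scorer, nudged so the human-preferred answer
# scores higher (a pairwise logistic loss, as in reward-model training).
w = np.zeros(4)
lr = 0.1
for _ in range(2000):
    chosen, rejected = fake_comparison()
    margin = (chosen - rejected) @ w
    p = 1.0 / (1.0 + np.exp(-margin))          # model's probability the raters are right
    w -= lr * (p - 1.0) * (chosen - rejected)  # gradient step on -log(p)

print(np.round(w, 2))  # the weight on feature 0 dominates: it learned the raters' taste
```

All of those comparisons come from people, paid to read and judge outputs, which is the human labor the rest of this section is about.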

There is a difference here: these people aren't literally pretending to be AI. Still, using ChatGPT or anything like it is using the labor of thousands of people paid a couple of dollars an hour to do work that no one here would do. That may or may not be worth it. That's a whole class of ethical problems. At the very least, I want you to be aware of how these things work, in order to make informed choices about how to use them, or how to build them.

Developers Make Things

These things are cool. I don't want to get left behind. This is where all the money is. My boss told me I have to put some AI on it, so that we can say AI in our next press release. How can I do it right? What kinds of things might we do? We're developers, we make software systems. Let's talk about some of the things we might do. We already know that these systems are just computers running programs. Some of you may be the developers of those actual AI systems. A much larger set of us are using those systems to build other systems, whether to build things like GitHub Copilot, or to put a chatbot on a website to talk to customers, and so on.

Then an even larger set of us are using those LLMs, or things that are wrappers around them directly for their output. Let's work our way down and talk about some of the choices we can make. As users, we're a lot like anybody else, but with one superpower of being a developer, and being able to know what a computer system like an AR-LLM can do or not do. How can we ensure that we use them wisely? If you wouldn't email a piece of your company's confidential information, like source code or business data to someone at a given other company, you should think twice before putting that same data into a service that other company runs, whether it's a chatbot, or otherwise.

Unless you have a contract with them, they may give that information on to someone else. Many large companies right now have policies against using systems like ChatGPT, or Copilot for any of their real work, not because they are anti-AI, but because they generally don't let you send your source code out to companies that don't have a responsibility to treat it properly.

The flip side of this is that since so many of the legal issues are still being worked out about property rights for works that have passed through an LLM blender, you may want to be cautious before putting code, text, images, or other content into your products or systems if it came out of an LLM you didn't train yourself. This isn't just about avoiding lawsuits. I've been involved in discovery and diligence for startup acquisitions on both sides. Everything goes a lot smoother if there are squeaky clean answers to where everything you have came from. This is not a hypothetical concern. There are hundreds of known cases of data leakage in both directions. Be careful sending anything you wouldn't make publicly available out to these systems, or using any of their output in anything you wish to own.

This particular concern goes away if you train your own model, but that's a lot more work. How should we use them? If you train the LLM yourself on content you know is ok to use, and you use it for tasks about language content, producing things for your own consumption and review, you're on pretty solid ground. Even if you only do one or two of those things, it puts you in a much better place.

For example, using them to help proofread your work and notice errors can be great. I do this. Or, as Simon Willison does, using them to help you build summaries of your podcasts or blog posts that you will then edit. Or you can use them as a debate partner, an idea I got from Corey Quinn: having them write something you disagree with in order to jumpstart your own writing. Having yourself deeply in that cycle is key for all of those, because there will be errors. You can also choose to use them because of how inevitable the errors are. I really love this study about using LLMs to help teach developers debugging skills. Since the key success metric of LLMs is creating plausible text, and code is text, they're great at creating software that is buggy, but not in obvious ways.

I'm focusing on use cases here where faulty output is either OK (I can ignore bad proofreading suggestions) or even the whole point, because you cannot count on anything else happening. When people forget this, and they send LLM-generated content right out to others, they tend to get into trouble, because that text is so plausible that you can't count on noticing.

An LLM will cite academic work, because that makes it plausible, but the citations might not be real. Like this one that ChatGPT gave to a grad student doing research, where that publication just doesn't exist. Or legal precedents that didn't exist, but made for plausible enough text that a lawyer cited them to a judge. The lawyer asked ChatGPT if they were real cases, and it said they are real cases and can be found in legal research databases such as Westlaw and LexisNexis. That had some plausible text, but it wasn't connected to any facts.

This is even worse for code. Having an LLM write code that doesn't matter, because its whole purpose is to teach students debugging, is great. Having it suggest places where your code looks funny, as a not too skilled pair programmer, that can be fine too. Having it actually write the code you're going to ship is not what I would recommend. Was generating the text of your code ever the really hard part? Not for me, personally. The hard part of software is in communication and understanding and judgment. LLMs can't do that job for you. I've seen smart folks get tricked about this. That's why I'm here to help you. I get told often that these AR-LLMs are able to reason and to code, but they don't have any reasoning inside them.

I wanted to make sure I wasn't wrong about that. I tried the simplest question I could think of that it wouldn't literally have the answer to already memorized. I asked the latest version of ChatGPT, 4.0, to just count the number of times the letter e appears in my name, Justin Sheehy, 3 e's it says. That's not right. I've been told to just ask more questions, and then it'll figure it out. I tried a little prompt engineering. I did it the way people say you should, asking it to show its work and giving precise direction. As always, it sounds very confident. It backs up its answer by showing its work. That's not better. It's really not right. This is an extremely simple thing and should be in its sweet spot if it could do any reasoning or counting.
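For contrast, here is what the same task looks like when something actually counts rather than generating plausible-sounding text about counting:

```python
name = "Justin Sheehy"
print(name.lower().count("e"))  # prints 2: trivial for code that counts, not for autocomplete
```

A two-line program gets it right every time, because counting is what it actually does.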

I tried similar questions with different content, just to make sure I wasn't tickling something weird, with similar results. This is just a reminder, these things don't reason. They don't sometimes get it wrong. They're always just probabilistically spewing out text that is shaped like something that might come next. Sometimes by chance, that happens to also be correct. Use them accordingly. They can be used well, but don't forget what they are and what they aren't.

What about when we're not just users? What about when we move down the stack and build AI into our software as a component in a larger system? Here, it gets even more important to be careful of what content the model was trained on. If you train it yourself on only your own content, it's still just sparkling autocomplete, but at least you know what it's starting from. If you go the easy route, though, and you just use an API around one of the big models, they've been trained on the whole internet. Don't get me wrong, there's an enormous wealth of knowledge on the web, Wikipedia, and so on. There's also 4chan and the worst subs on Reddit, a whole lot of things you might not want your software to repeat.

Part of being responsible is not bringing the worst parts of what's out there through your system. When I say bias laundering, what I mean is that people tend to feel that an answer to a question that came from a computer via an algorithm is somehow objective or better. We're developers, we know all about garbage in, garbage out. If the whole internet is what goes in, we know what will come out. This isn't hypothetical. People are making these choices today, embedding use of these pretrained language models into systems making important decisions, and the results on race, gender, and religion are predictable. We can do better. How can we do better? We can start with testing. Just like we have testing tools for the rest of our software, we can test AI models for bias, for example with this pretty nice toolkit from IBM. That should be a basic expectation, just like writing tests for the other parts of your system should be expected. It's not enough, but it's a start.
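The IBM toolkit referred to here is presumably AI Fairness 360; rather than reproduce its API from memory, here is a hand-rolled sketch of the kind of check such toolkits automate, comparing how often a model grants a positive outcome to different groups. The records, group labels, and threshold are made up for illustration.

```python
# Toy fairness check: compare the model's positive-outcome rate across groups.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def selection_rate(records, group):
    in_group = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in in_group) / len(in_group)

rate_a = selection_rate(records, "A")
rate_b = selection_rate(records, "B")
print(f"disparate impact (B vs A): {rate_b / rate_a:.2f}")  # well below 0.8, a common red flag
```

Real toolkits add many more metrics and mitigation methods, but even a check this simple, run routinely like any other test, is better than shipping a model's biases unexamined.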

Another set of irresponsible decisions being made right now can be seen by walking around almost any conference that has vendor booths and counting the things that have become AI-powered. I understand the tendency and the pressure, but this is not harmless. That AI-washing exercise, that money grab by saying, we'll solve it with AI, somehow, can mean that other systems, maybe other ways to save lives, don't get the resources they need. This isn't just failure, it's theft in the form of opportunity cost, preventing important work from happening. Or worse, you can give people confidence that an algorithm is on the job and cause real life-or-death decisions to be made wrongly.

Saying AI might make it easier to sell something, but it might cause your users to make dangerously worse decisions. What can we do? We can talk with CEOs, product managers, and everyone else about what they care about: the value that our software systems provide them. Instead of adding a couple of hyped buzzwords, we can figure out if and how adding some of these AI components will add real value. We can help those people learn to ask better questions than just, but does it have AI in it?

Accountability in the Age of AI

This and the rest of my advice here applies at multiple levels in the stack, whether you're incorporating an LLM into your system as a component, or you're actually doing your own model development. No matter which of those things you're doing, being a responsible developer requires accountability. That means that your company needs to understand that it is accountable for what it ships. If you develop it, you're accountable for making sure they know that. What does this accountability look like?

You know now that an LLM simply cannot be forced to not hallucinate. If you put one in your app or on your website, you have to be prepared to take the same accountability as if you had put those hallucinations on your site, or in your app, directly. That cool AI chatbot that let your company hire a couple fewer support staff might mean that the company loses more money than it saved when it has to give out the discounted products or refunds that the chatbot offers, and that might not end up being the value they hoped for when you said you were going to add some AI. It's your responsibility to make sure that they know what the systems you develop can do. How can we do it? That part's pretty simple. We need to not lie. I don't just mean the intentional lies of fraudsters and scammers. You need to not make the hype problem worse.

It doesn't mean not using or making LLMs or other really cool AI systems, it just means telling the truth. It means not wildly overpromising what your systems can do. Microsoft had a Super Bowl commercial where someone asked out loud of their AI system, "Write me code for my 3D open world game." That's just pure fantasy. That doesn't work with anything today. No one actually has any idea how to make it work. Microsoft has done some really cool work lately, and they should have represented it more responsibly. This isn't just my advice, it's the FTC's. This is advice from the U.S. government on some questions to ask yourself about your AI product. I think it's a pretty good start.

What else can you do? If you can't do something legally and safely, and I'm not talking about active political protest or anything like that, then don't do that thing. This is another one that sounds really obvious. Almost everyone will agree with it in general. Then I hear objections to specific cases of it. That sounds like, if we complied with the law, we wouldn't be able to provide this service. Or, if we took the time to make sure there was no child pornography in our training sets, we wouldn't have been able to make this fun image generator. We just have to violate the rights of hundreds of thousands of people to train a huge AR-LLM. Do you want to hold back the glorious future of AI? Of course, I don't want to hold back the future.

The future success of one particular product or company does not excuse such irresponsibility. All of those were real examples. A starting place for being a responsible developer is to develop systems legally and safely, not put the hype for your product ahead of the safety or rights of other people. It feels really weird to me that that's an interesting statement, to say that other people's safety or rights should matter to you. It feels like it should be obvious. I hope that to you, it is obvious. That if you have to lie or violate other people's safety to ship something, don't ship it. Do something else instead, or do your thing better. I am excited by a lot of the developments in the field of AI. I want that research to continue and to thrive. It's up to us to get there safely.

When I talk about what not to do, I'm not saying that we should stop this kind of work, just that we need to make each choice carefully along the way.

Alignment

I want to talk about alignment. There's this common idea about alignment that comes up in the circles where people talk about building AGI or general intelligence: the idea of making sure that AI shares our human values, instead of being like Skynet or something. These are really well-meaning ideas. The problem they're focused on is still wild science fiction, since no one has any idea yet how you would even start getting to AGI. We're multiple huge breakthroughs away from it, if it is possible at all. That doesn't mean this work doesn't matter.

Bringing ethical frameworks into the development of AI or any other technology is worthwhile. This is an interesting paper from Anthropic, which is all about that topic of alignment and general-purpose AI. Despite not agreeing with them about the trajectory towards general-purpose AI, I think that their framework is very interesting, and we can make use of it. The framework is really nice and simple and memorable. The premise is that an AI is aligned if it is helpful, honest, and harmless: the three H's. It's pretty much what it sounds like.

The AI will only do what is in humans' best interests, will only convey accurate information, and will avoid doing things that harm people. This is great. I think these are excellent values. You can think of their research as being just as science-fiction-y as the idea of general AI, but I think it's relevant today. You can make use of it right now, by leaving out the AI part of the definition and applying this framework to yourselves. If you can live up to the framework for aligned AI, then you have what it takes to be a responsible developer.

Make sure that what you build is helpful, that it has real value, and that it isn't just hype chasing but is a solution to a real problem. Make sure that you are honest about what you build, that you don't oversell or misrepresent it, or make the hype problem worse. Make sure that you are honest with others and with yourself about what you build and how it works. Make sure that you minimize the harm caused by the things you build or caused by building them. Pay attention to what it takes to make it really work, how it will be used, and who could be harmed by that. Ensure that you center those people's perspective and experience in your work. You need to help people, be honest with people, and minimize harm to people, and think about those people as you make your decisions. I think these things are pretty easy to remember, and you can do them.

Then you can exercise great responsibility. Remember that you have that great responsibility, because you, developers, have great power, perhaps more than you realize. You get to help decide what the future looks like. Let's make it a good one for people.
