[Note: please be advised that this transcript contains strong language]
Transcript
Schlossnagle: I'm here to talk to you about ethics in computing, which actually means talking to you about ethics before we talk about computing. You may be disappointed that there's not a lot of computing in this, because it turns out ethics is a lot older than computing. I do like the term ethical debt. Everyone seems really familiar with the concept of technical debt, where you make implementation decisions that you realize might be suboptimal or might not completely embrace the constraints of the problem at hand, because we can get to that later. Right now we just need to get something out to market. The problem is that, with ethical considerations, we've done that by simply ignoring them. As computer scientists, typically we build things to see if we can. I don't know how many conversations I've had with people where there's some sort of concept, and the question, "How can we solve this?" always comes before, "Should we solve this?"
Are there any ethicists in the crowd? Anybody who is a professional ethicist, or has a degree in ethics or philosophy? That's great. I am also not one of those people. I'm not an ethicist. I talk to ethicists pretty frequently, and I've given this presentation to a crowd not knowing there was an ethicist in it, and I'm happy to report they came up and said, "I don't think you got everything right, but man, am I glad you're talking about this." That was great feedback; I made some modifications as well, but this talk has a lot more questions than answers.
What Ethics Is
The first thing that we need to do is actually understand what ethics is. As any proud Greek person will tell you, everything started there anyway. Ethics is a great word; it comes from philosophy – Plato, Aristotle, Socrates. It is basically the way that we systematize, defend, and recommend concepts of right and wrong conduct. That is what ethics is. When someone says, "You shouldn't have done that," that is an ethical judgment. This is a framework. Ethics is a framework, not for telling you what's wrong and right, but for systematizing, defending, and recommending what's wrong and right. The really important part about that is that it's a system for doing it. It's like a software development lifecycle for wrong and right in a lot of ways, except that I think it's much more formalized, because they've had philosophers, brilliant and obnoxious, argue about it for millennia.
In Western philosophy, normative ethics are typically viewed through three different lenses: virtue ethics; deontology, which is duties-based ethics, duties and rules; and consequentialism, which is ethics valued by the consequences of actions, not by whether the action was virtuous. All of these things tend to end up in the same place, so calling one of them better than the others misses the point. It's really the lens through which we view what's wrong and right. People tend to argue the exact same way for what they think is wrong and right regardless.
As computer scientists, everyone always says, "I would like rules. I would like a code. I would like some sort of non-interpretative implementation of figuring out what's wrong and right. I would love deontology. I would like duties and rules. I just want to know that what I'm doing is ok." The problem with that is that our industry moves really fast and we're really young, so we have no idea what the outcomes of those things are. I personally think that, because we don't know what the hell we're doing, it's much healthier to look at this from a consequentialist point of view, to really look at the outcomes of our actions and realize, after making mistakes, that was a mistake. Maybe we shouldn't do that sort of thing again. Maybe we should preface this with a set of questions. Maybe we should change the way we pursue our goals to make sure that we don't make innocent mistakes up front that, for example and most commonly, violate human rights out in the world.
Change Over Time
Another really important thing about ethics is that they only exist within the context of human society. That is all that ethics are. You can't have wrong and right without human beings, because human beings are the only ones that have the concept of wrong and right. This is only in the context of human society, and, you guessed it, human society changes a lot. It has changed. I remember the first time I gave this presentation, I was in London, and I got up on stage. I was much angrier at that point, because it was right after the VW emissions scandal and some other stuff, and I was like, "What the hell is everyone doing? How can you make these decisions and think that they're ok?"
Women were property. That was not long ago. That is totally unacceptable in modern society, but that was true. I would argue maybe 200 years ago, but definitely 1,000 years ago, there was no question about that. Society's constructs of what wrong and right were at that point were entirely different than they are today, and they will be different in 50 years. I will tell you, if you follow along with ML and AI, they're going to be different in five years. What we're doing is going to have some very bad consequences – not doom and gloom, but bad things are going to happen, and we're going to realize that we need different ethical constructs to look at these things through.
Medicine is the one that's really different. The Hippocratic Oath actually stood up really well over time, and that's a very old oath, but it's not actually the first way that ethics were expressed.
Applying Ethics
The first real institutionalization of ethics was in the clergy, one of the first professional trades, because the word profession comes from profess – professing their fealty to God. They were some of the first professionals, and with professionals comes a code of professional conduct, which is professional ethics. It turns out professional ethics are there to give people without power trust in people with power. If someone has got a direct line to God, they've got a lot of control over you as a peasant. You actually have to understand what their intentions are.
Now you can cite a bazillion times in history where those intentions weren't well thought out, or weren't executed on, but there was a context for them. It was a context for what that person professed to bring to you. That clearly went directly into medicine. The Hippocratic Oath was that way. It didn't matter if you were a bad person, or a good person, or if that physician hated you; they had one job to do when they were attending you, and that job was to do no harm. They couldn't hurt you. They could refuse you service, but they would not treat you and then assassinate or poison you. That was off limits. It's a really important construct in medicine, and it sets aside the human condition. That's really interesting. We don't have anything that's quite like that in computing, and there's a good reason for that.
This also went into business ethics. If you look at some examples of business ethics that are bad, you can follow the lawsuits going through the current administration of the United States – all sorts of really horrible business ethics happened there. Business ethics are important, and professional ethics are broad; they usually tend to become domain specific as soon as you have ethical concerns in that domain that don't present elsewhere. The reason computing is so problematic is that medicine, and clergy, and business had all been around in some form for 10,000, 15,000 years before the Hippocratic Oath even came out.
If you look at Greek medicine in the 200 AD time period, that was scary. It's not how I want to be treated by a physician, but still, it's really important to remember that they had 1,000 years of medical practice, through shamans and all sorts of stuff, where people had to trust that that person wasn't going to hurt them, but was there to help them. I don't know anywhere in the computing world where someone says, "I met a computer scientist from Lyft, or Uber, or Pinterest, or Google, and I'm pretty sure that everything that they do is to make sure that my human rights are maintained, and that my life is better, and the quality of my life is better. That's their purpose. When they write code that I interact with online, that's their goal."
I can't name any computer science professionals that have even made that commitment to themselves. That's a huge problem for our industry. It leads to some really interesting things, some that are just outright batshit crazy.
VW Emissions
Has anybody here made a bad ethical decision before? Yes, we're human. To err is human. The reason that we do these things in groups is because we tend not to err all at the same time. It doesn't take a good actor to see a bad actor. That's the beauty: you don't need to be a profoundly ethical, down-to-the-core fundamentalist to be able to say, "You're being a dick. This is not ok. This has bad consequences." We're really good at finding bugs in other people's stuff. That's the important part.
One of the reasons that the VW emissions scandal is so concerning is because it was an entire team that orchestrated the violation of the spirit of the law. The goal there is: maybe we shouldn't shit on our planet. That's the premise. In order to do that we burn fossil fuels, but maybe we should burn less of them over time. We should be more efficient about how we do that, so we have these emissions regulations that say cars should behave a certain way, and they shouldn't be coal rollers. I'm from the East Coast near West Virginia, and we have this thing called coal rolling, which is this horrible thing. People put exhaust stacks on their diesel trucks, and when they drive by a liberal they'll hit the gas really hard and shoot all the diesel dust out of the top of the thing. Pretty sophisticated and mature.
The goal is to say, "We can't make cars that do that. That's really bad for the environment. It causes all sorts of problems." In those places, sulfur emissions cause acid rain, all sorts of nasty stuff. The VW emissions thing is a group of engineers that saw a goal of reducing emissions, of building an engine and a chassis to a specific standard so as not to exceed those thresholds, and who systematically coded it so that they knew they would pass while being tested, even though they violated the law everywhere else. I would say that's how you do test-driven development. It's awesome. Test-driven development makes sure you pass your tests.
The problem is, articulating your tests as the spirit of your actual goals is a very hard problem and almost never done well. People are in jail now because of this. It's actually one of the better outcomes of all of these things.
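To make the gap between passing the test and meeting its spirit concrete, here is a minimal, purely illustrative sketch of what defeat-device-style logic amounts to – an assumption for the sake of the point, not VW's actual code. The controller guesses it is on a test stand from sensor patterns and only then selects the compliant calibration, so every official test passes while real-world driving violates the spirit of the regulation.

```python
# Illustrative sketch only (assumed logic, not VW's actual code).
def looks_like_test_cycle(steering_variance: float, speed_trace: list[float],
                          reference_cycle: list[float]) -> bool:
    # On a dynamometer the wheels never steer and the speed trace closely
    # follows the published test cycle.
    if steering_variance > 0.01:
        return False
    deviation = sum(abs(a - b) for a, b in zip(speed_trace, reference_cycle))
    return deviation / max(len(reference_cycle), 1) < 2.0  # km/h, arbitrary tolerance

def select_calibration(steering_variance: float, speed_trace: list[float],
                       reference_cycle: list[float]) -> str:
    if looks_like_test_cycle(steering_variance, speed_trace, reference_cycle):
        return "low_nox_calibration"      # passes the regulator's test
    return "performance_calibration"      # emits far more NOx on the road

# The test suite sees only the compliant path:
print(select_calibration(0.0, [0, 15, 32, 50], [0, 15, 30, 50]))  # "low_nox_calibration"
```

The tests pass; the goal the tests were supposed to encode does not.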
Uber Greyball
Another one – I tried to make these a little bit more complicated as we go, but everything here is multi-dimensional and interesting at scale. Uber is the company that pays you to drive the cars, just looking at their quarterly statements. It's a really interesting business model, but they had this problem.
Did anybody know about the Greyball issue? I'll structure it as the premise and the complaint, and then rewind to understand why this needs to be incorporated into the software development lifecycle. Greyball was, in a nutshell: I want to run Uber in Portland, Oregon, or Seattle, or wherever it was, somewhere in the Pacific Northwest, and I'm going to do it. The regulator says, "You're not allowed to do that," and you say, "Guess what? I can detect who's a regulator and I can show them a different view of where cars are – like none of them." When you're a regulator there are no cars, but I can open the Uber app and get an Uber in two minutes. Does that seem maybe wrong?
How could you possibly build a feature designed to avoid regulators? Well, I can tell you. Another thing is that Uber was in San Francisco long before that. Taxi drivers were very angry about this because it was stealing from their opportunity. It is somewhat of a fixed pie. You can't just create more people that need rides. There's a fixed number of people that need rides – it undulates – and now you have a competing market. Has anybody ever tried to find a taxi cab? Really hard. If you try to find an Uber, it's really easy. You open the app, you see exactly where their cars are, and you can throw rocks at their windows, which was what was happening.
There were threats to the physical safety of Uber drivers by someone – you can imagine that would be taxi drivers and their friends. Uber engineers created a way to misrepresent where cars are on maps to keep their drivers safe. One of those ways happened to be to remove the car entirely, and a different unit inside of Uber got ahold of it and did some pretty diabolical things with it, like avoiding regulation. It's a really interesting thing when you say, "How can you build a thing like that?" It turns out they built it for ethical reasons. They were trying to protect individual people, so that's a complicated one.
Strava Global Heatmap
This one, I would say, is naiveté. I think that this has all been fixed for quite a while. Does anybody remember the Strava Global Heatmap issue where you could identify military bases? Strava had this – it's like a Fitbit. Strava allows you to track your exercise, running and biking in particular. There are two issues with this: one of them is the one that was publicized, and the other is the more, I think, nefarious one. The idea was that all of these military people – it turns out they work out a lot, it's part of their job – a lot of them were using Strava, and because the data was global, you could see who else was running. You couldn't see them by name exactly, but you could see what their actual path was, so that you could run similar paths, you could compete against people anonymously. It was really cool, except that secret military bases that had no names had dark circles with all the troops running on them, and you could see it from anywhere.
Everybody was, "National security issue." The last place I'd want to target to pick on is a military base filled with people who exercise a lot. That is not a good target. However, the other implication of that is that with a little bit of statistical analysis you can figure out who rides their bike, how fast they ride it, how long they ride it, and where their trail ends in their garage, which gives an incredibly good estimate of how expensive their bike is. It's a global theft map for expensive bicycles.
There are a lot of different ways to do that. There is no doubt in my mind that there was not a single Strava employee that was like, "I need you to build a theft map," or, "I'm going to disclose national security secrets." Pretty low-hanging fruit there. It was a dumb way to do it, because it was so exposed. Ideally, if you're going to expose national secrets, you expose them to a select few and not to the world. This is an implication of privacy. When we think about privacy, a lot of times we forget what exactly privacy is, and I'll try to get to that as an important point in the takeaway at the end.
Technical Concepts Hand Soap Dispenser
Another one, and the last one of the anecdotes, is the Technical Concepts hand soap dispenser. This has been an ongoing viral thread; there are TikToks around it, and Snapchats around it, and YouTube videos and stuff. The original video was of someone in a hotel bathroom or an airport bathroom. It was an African American person, and they couldn't get the soap dispenser to work because it wouldn't recognize their skin tone. Everybody was laughing in the video; they thought it was pretty funny. I think that as an individual thing, especially if you're a software engineer, that could be like all those integration test comics where you're, "I tested the faucet and I tested the plumbing," and then the faucet is on upside down and sprays everyone. It's, "Integration testing, we screwed up." It's kind of amusing.
The problem is that that has really wide social effects. The problem was that it clearly was never tested on anybody but white people, which is a horrible QA problem in the first place. There's a great way to accidentally only ever test it on white people, and the best way to do that is to have a team of only white people, because they don't think about it that much unless someone on that team is exceptionally woke. It's just not part of the thought process, especially when you're trying to code the eight-bit microprocessor on there to keep it down in [inaudible 00:19:49]. You're focused on a micro problem, you're building this thing out, and that's never really a thought-out consideration of that team, and that's the problem with that.
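As a hedged illustration of how that kind of gap slips through – an assumed design, not the actual dispenser firmware – the failure can be as small as a single reflectance threshold that was only ever tuned against light-skinned test hands:

```python
# Assumed sketch of a reflectance-based hand detector (not the real firmware).
# The IR sensor reads reflected light; darker skin reflects less IR, so a
# threshold tuned only on light-skinned testers silently excludes other users.

DETECTION_THRESHOLD = 0.40   # calibrated against a homogeneous test team

def hand_detected(ir_reflectance: float) -> bool:
    # Every test with the team's own hands passes; users the team never
    # tested with fall below the threshold and get no soap.
    return ir_reflectance > DETECTION_THRESHOLD

def dispense_if_present(ir_reflectance: float) -> str:
    return "dispense" if hand_detected(ir_reflectance) else "idle"

print(dispense_if_present(0.55))  # "dispense" -- the case the team tested
print(dispense_if_present(0.22))  # "idle"     -- the case the team never saw
```

The unit tests never fail; the demographic that was never in the test set simply never gets soap.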
What Now?
These are all anecdotes, so the real question is, what do we do about that? The first step is taking responsibility for that. Is this really my problem as a software engineer? Normally my presentations are littered with F bombs but I saved it just for this slide. It is your fucking problem. This is your problem, you are writing the code that influences people's lives. More and more you're building global platforms to help people shop, to help people study, to help people get medicine. All of these things are directly impacting people's lives.
The worst case scenario is that you target people and make them not eligible, or exclude those people from these innovations. It's not far behind that to think, "I enable a certain set of people while not enabling another," or, "I give a certain set of people an advantage over another set of people." One of the big problems that we have in the world is haves and have-nots, and you can create and emphasize those divides through, sort of, reckless software engineering.
Clearly it's not only your responsibility. I've given this talk at SREcon, which was held here, and in Asia, and all over the place. One of the interesting parts about being a site reliability engineer, which I interface a lot with, is that they end up delivering, or being instrumental in the delivery of, those software platforms to the world, so all of those individual components that were worked on – they see them glued together and actually pushed out to the customer. There's this concept that the SRE is in a really good position to be the watchtower for these sorts of things. When someone has built a system that helps people not get rocks thrown at their heads for driving an Uber, and didn't see it deployed in this other way, the SREs are still responsible for the reliability of those systems, so they end up having an access and a visibility into the platform that gives them a great opportunity to say, "That's fucked up. That's not ok." It needs to be supported by everyone down below. The last thing that you should ever do is say, "I just wrote the code and I don't care." You're in the wrong talk if you don't care.
One of the really interesting things about this is when you start thinking about, "What is a bad ethical consequence of something?" This is a really hard question, so I like to simplify it into an incorrect but incredibly good answer, which is: a bad ethical consequence is when you violate or deteriorate a person's human rights. There are other ways to do it, but it turns out there are a lot of human rights.
We had this really horrible war that caused the fracturing of Europe in the 1940s; you've probably heard of it. Out of that we got the Universal Declaration of Human Rights. It was actually pioneered by people in the United States as well as throughout Europe. It was signed off by everybody. It's called the UDHR. If you haven't read it, it's not long. It's not perfect, but it's really good; it talks about the things that humans should always be allowed to do, always. It's a human right to seek asylum. It's a human right. I don't really care about U.S. policy, because at the end of the day we're all humans. Humans have the right to seek asylum. Humans have the right to privacy. Humans have the right to apostasy – you're allowed to leave your religion. Those are human rights, inalienable on the globe. I'd like to think outside the globe, too – when we go to another planet, it would be really good if we could take those rights with us.
The idea of building software that makes it hard for someone to realize their human rights, or allows another person to disenfranchise someone's human rights, is consequentially shitty ethics. That's what that is, so keep that in mind. Privacy is a human right. This is a huge conundrum for our industry. Privacy – think about what you bought on Amazon. Should that be private? I hope everyone is, "Yes. Have you seen the 'My Little Pony' shit I bought on Amazon? This is really awkward." I have three daughters; I've actually never bought "My Little Pony" stuff on Amazon, but I'm sure I've bought some pretty embarrassing things on Amazon. I've done online video watching, all the things that you might do. You would think that those things should be private.
Now we have large organizations whose entire profit model is to take your behaviors online, and codify those into manipulative tactics. Manipulation is not always bad. You can take someone's interactions online. You can codify those into helping them stabilize their mental health. You can give them indicators that, "I think you need to talk to a therapist. You're talking dark right now. Are you ok?" Those are things you can use behavioral indicators for. The other thing you can use them for is promoting products. "Hey, you've looked at these 25 books?" and "I think you should read this book." There are a lot of implications in that.
A lot of people immediately think of the machine learning aspects of that: sure, I read all these books and then you recommend this other book that everyone who looks like me reads, because you've already told the other people who look like me to read the same thing. It's all self-reinforcing. I don't get a breadth of opinion. Sure, that's a problem. The other problem is that they have this data and they are not immune to compromise. Consequentialism is about outcomes. I don't care whether Amazon got hacked or intended to release my information. I care that my information got out, and by storing it in the first place you put your users at risk for privacy disclosures.
NordVPN is a great example of something that just happened. The whole concept of NordVPN is for people to be anonymous, to hide what traffic they're doing for their own privacy. That's a human right. They said they didn't log anything, and it turns out they were pretty good about that. They got compromised, so all of the data that they had available around every single user's interactions was exposed. Their entire purpose was keeping your privacy, yet the decisions that they made – and I don't have any insight into what those decisions were. From an engineering perspective I can think of a whole bunch of reasons why I'd want that data, why I'd want to know where people go and all that stuff, because I need to do network optimizations. I need to be able to understand exactly what the bandwidth between different mesh nodes is. I need to be able to handle capacity in certain regions versus others. I need to understand demand for different types of VPN services.
By collecting that, I have the potential to expose private information about users. It's a serious consideration. This is much worse in organizations whose entire revenue model is predicated on collecting private information. It's one of the reasons security is so important. At the end of the day, if you've got the goods, you have the liability. We have a huge issue in our industry that that liability doesn't actually manifest in any way. I was a victim of the TransUnion Experian thing. All my credit details, my Social Security number – I'm sure they're out there. If anybody needs my Social Security number, you can get it online. That's a huge problem.
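One concrete, hedged way engineers act on that consequentialist framing is data minimization: keep the aggregate counters that operations genuinely needs and never write the per-user records that become the liability. A rough sketch, with invented names rather than any vendor's actual code, of what that choice looks like:

```python
# Rough sketch of data minimization (invented names, not any vendor's code).
# Capacity planning needs aggregate throughput per region; it does not need a
# per-user log of who connected where -- which is exactly the record that
# becomes a privacy disclosure if the service is ever compromised.
from collections import defaultdict

class RegionStats:
    def __init__(self):
        # Only aggregates are retained; no user identifiers are ever stored.
        self.bytes_by_region = defaultdict(int)
        self.sessions_by_region = defaultdict(int)

    def record_session(self, region: str, bytes_transferred: int) -> None:
        self.bytes_by_region[region] += bytes_transferred
        self.sessions_by_region[region] += 1

    def capacity_report(self) -> dict:
        return {r: {"sessions": self.sessions_by_region[r],
                    "gigabytes": self.bytes_by_region[r] / 1e9}
                for r in self.bytes_by_region}

stats = RegionStats()
stats.record_session("eu-west", 2_500_000_000)   # no user ID, no destination kept
print(stats.capacity_report())
```

If the aggregate table is all an attacker can take, the consequence of a breach is an operations leak, not a privacy disclosure.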
That whole industry is built around violating privacy – the entire industry – in order to lower credit exposure for credit lenders. The question is, is it ok to sacrifice people's fundamental rights for better risk management on the corporate side? I'll leave that up to you. That's an ethical question. My answer is fuck no, but those are the ethical questions. Again, I'm not here to tell you what's right and wrong. The point is that if you're not asking that question during the building of that product, during the offering of that service, you're doing a disservice. You may come to a different conclusion than me, which brings me to the next aspect of this.
Forecasting the ethical consequences of things can be really hard. I mean, you need a crystal ball. You need to be able to tell the future. All that requires doing is anticipating the consequences to human beings from what you're going to build as an organization – not as a person, as an organization. Caveat: human beings that don't look like you, don't act like you, and don't live in the same localized society as you. This is a huge motivation to build out diverse teams, because people from different backgrounds are never going to cover all your bases, but when you start covering multiple bases you have an avalanche effect. You have a snowball effect there where you say, "This doesn't work for my skin color." It's not going to stop with, "Let's fix it for African Americans." It's going to be, "Maybe we should test all the skin colors." The next question comes from that.
If you start having different perspectives on problems, especially cultural perspectives and different socioeconomic perspectives, it starts a line of questioning. The point there is to start that line of questioning and take it to some sort of reasonable and affordable conclusion. I will say that the software development lifecycle – I know that it's in the abstract of the talk, and it's really hard to talk about software development lifecycles in general – but the important part is that whatever yours looks like, or happens to look like today, it includes the ethical considerations: "Is this ok? Should I be doing this? What are the consequences of this? Who could this hurt? Who could this unfairly help?" Those are hard questions and they should be asked.
Start the Conversation
That one has six steps, that one has nine, that one has seven. That one also has six, but they're different than the other six. This is what software development really looks like. It's just like, "whatever." The important part is that all of these things end up being cycles, because people keep their jobs for a long time and they write more than one thing. The cycle needs to have in it a concept of what the ethical consequences are of what we're doing here, both at the level of the code that I'm writing – and if you're like me, I write low-level systems code most of the time, block caches, and device drivers, and things like that, where it's a little hard to think about what the consequences are – and at the requirements level, which is why it's important that in the software development lifecycle, I don't know which one has this here, but in the requirements you have stakeholders.
Those stakeholders are using your software to build things. You're going to build crappy things if you don't know what they actually want to build. If you're building something in isolation from the actual requirements you're not really doing a very good job as a software engineer. You have to know what you're building is going to be used for. That is a perfect opportunity to ask, "Why is it being used for this? What's the market for this? Who are we going to affect? Who are we going to help?"
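If you want something mechanical to hang those questions on, one hedged sketch – the names and question list below are my own framing of the talk's points, not an established standard or anything the speaker prescribes – is to treat them as a required artifact of the requirements step, the same way you would treat a design review:

```python
# Hedged sketch: the talk's questions as a required requirements-step artifact.
# The structure and names are invented for illustration.
ETHICS_QUESTIONS = [
    "Why is it being used for this, and what's the market for it?",
    "Who are we going to affect, and who are we going to help?",
    "Who could this hurt, and who could it unfairly help?",
    "Which human rights (privacy, non-discrimination, asylum, ...) does it touch?",
]

def ethics_review_complete(answers: dict) -> bool:
    # Nothing is automated away: the gate only checks that every question was
    # actually asked and answered in writing before the cycle moves on.
    return all(answers.get(q, "").strip() for q in ETHICS_QUESTIONS)

answers = {q: "" for q in ETHICS_QUESTIONS}
assert not ethics_review_complete(answers)   # unanswered questions block the step
```

The value is not the code; it is that the questions get asked, and recorded, at a fixed point in every cycle.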
I will pose to you a hard question, because I think our industry is really immature. We've been at this for 50, 60 years in earnest. Everyone can argue when it happened; Al Gore doesn't even remember. The early '90s is when the hockey stick of accessibility to the global internet really happened: '90, '92, '95, and then by '99 we knew we were on this slope and it was going to go to the billions. All of that happened in less than 20 years, from '99 to '19. That's 20 years where we went from having 200,000, 300,000 people online to billions of people with global access. That's not enough time to develop an ethical policy even if you're not growing like that. That is a hard problem.
I didn't come with a lot of answers; I came with a lot of questions. I will say that the ACM, the Association for Computing Machinery, arguably the oldest professional society for people like us, has an ethics policy. It was old and broken, from 1992. It was updated this past year. I think it's pretty relevant. Again, none of this is perfect, but I think it's pretty darn good, so I would recommend reading the ACM ethics policy and then asking yourself on a daily basis, "Is what I'm building going to improve the quality of life for people?" Because if you start with that, it's a little hard to get completely on the wrong road. You're on the right path and then you just need course corrections. It's not like you're going the wrong way. You could argue, if you're building things to intern children, maybe that's not a really good thing to be building. I'll get to that in a second as well.
The ACM also has an Ask an Ethicist. Some of these questions are really hard. If you've ever asked an ethicist, it's not even an "I don't know." It's a, "You should...," and then it's stuff where you go, "I am more confused now than when I started." But the ACM has an Ask an Ethicist. You can go to the Ethics page where it has the policy. There's an Ask an Ethicist; you can send in an email and you'll get an actual ethicist at the ACM. They will brainstorm around what you should consider. Again, ethicists are not there to tell you you're right or wrong. They tell you how you should think about approaching deciding whether it's right or wrong. The most important part is to find somewhere in the cycle, rapidly, to reevaluate: what are the ethical implications of what I'm building?
Dissenting
One of the most important aspects of any of this is: when things go wrong, when you are in a position where you are asked to build, or find yourself building, something that is unethical, what do you do? It's really easy. I look around and I can't guess the privilege of the people in the crowd, but I'm going to guess you make a lot of money. I'm going to guess you have a savings account. Unlike a lot of jobs, in computing a lot of times you can say, "Y'all are bad people." Then you go on Twitter and you say, "I need a job, because my previous employer wanted to flay people alive," or whatever it is that they're doing, and you'll get a new job.
In computing we have a lot more privilege than people in other industries. However, there are a lot of people that work in this country who are beholden to their employer for their visa. You can't leave or you get kicked out of the country. You have a life here, and that's pretty inhumane in general. Resigning is not always a legitimate answer to that. That's not always acceptable. You have mouths to feed at home. You have insurance to provide your family, especially if you have a sick loved one. These are not easy ethical questions. Ethical questions do not often have black and white answers.
I'm an IEEE member as well. I'm not a huge fan of the IEEE – I think it's a little bit mechanical, and there's a lot of stuff there that doesn't need to be there – but they have a guide to dissenting on ethical grounds, and it is brilliant. This is it recapped; I have a link to it in the next slide. The first thing is: never think you're the only person who sees the ethical problem. There is someone else in your organization that sees it, and if they knew the details, there would be more people that saw it. Don't ever think that you're alone there or an outsider. There are people who probably see it and think that it's ok – again, people reach different ethical conclusions for things. As I like to say, some people hate other people. I'm not one of them.
Understand that ethics are a spectrum, so it's really hard to say, "This is wrong" or "This is right." Is upholding laws ethical? In general we believe in a society that operates, and a society that operates has to have laws. If you look at Western philosophy, those laws are enforced by giving the government a monopoly on violence. You're not allowed to punch somebody in the head or shoot somebody, but the government is actually allowed to incarcerate people, and in many places they're allowed to carry out the death penalty for violations. We've given the government the responsibility for implementing those laws, and as citizens we're supposed to abide by them. As government contractors we're supposed to adhere to them.
It turns out some of our laws violate human rights. At some point you're going to have to have a hierarchy and say, "I think our laws are more important than human rights," or, "I think human rights are more important than our laws." I will go on record and strongly encourage you to be on the human rights side. We fought a big fucking war about it. It's important stuff.
Understand that they're a spectrum, so not everything is black and white. Things are very complicated. Navigating that, it helps a lot to keep records. If you're going to fire someone, or you're going to be fired, you notice people start taking a lot more notes about everything that happens. They start to document everything. This is not a case of firing or being fired. This is a case of recording what you were asked to do, when you did things, what you were asked to build, what the team built, and with what knowledge they built it. All of that stuff – you just keep those notes. Those are your private notes. They're very important.
Then, when you go to build a defense that says, "I don't think we should be doing this," you have a lot of material there to actually build a dispassionate defense. You don't want me defending you, because I'm just telling people they're assholes for not respecting people's human rights. That's not what you want. You want a list of laws and a list of potential human rights violations here. You just want to be able to lay it out, and lay it out so that a jury can see it – not a legal jury, but a jury of your coworkers – where you can make a case and they can say, "You're right. I think what we're doing is wrong." Hopefully management says that, too.
Then, you work the system. A lot of these things are worked from inside the system. Is systematically paying women less than men a good thing or a bad thing? I happen to think it's a bad thing. There are workers inside of Google and inside of other organizations that have stood up; they've organized. That is working the system to try to effect that change. Sometimes there are consequences for that, so it's really difficult to work the system. Sometimes you get fired for that. It turns out getting fired for that particular thing is illegal, so it's not a good thing, but it's a good thing that it happens, because then it actually ends up going to court.
The last thing you can do is resign, but I think that this is all spelled out really well in the IEEE dissent guidelines, which – I have no idea why – I have only ever been able to find at iit.edu, but it is a [inaudible 00:40:32] document.
Questions and Answers
Participant 1: Thank you very much for your time. This is a great speech. I think it's a really important topic. A lot of the examples that you mentioned, I think, are consumer focused, and so I have two questions. The first is that, within large organizations, I think, you've spoken primarily about the role of the engineer to make decisions about ethics. It's my belief that that's a broader conversation. Organizationally, are there other groups, I think of customer success, that are outward-facing functions to represent the customer and then bring that into the software development lifecycle? Can you think of any other groups that they should be collaborating with on a regular basis in order to change the development?
Schlossnagle: I think that's the mystery of devops. Devops was never about dev and ops; it was about an agile lifecycle throughout the entire organization. I completely agree. Customer success is there; product management clearly, because they're building and choosing new features, and they're often aiming at one goal while the other goals, which could be very bad, are out of focus, so they don't see them. Legal is a fantastic resource to have, because when you start to spell out liability for legal, it's suddenly in all caps, and then things tend to happen.
Then, engaging customers themselves, which can be tricky. A lot of the time you don't have access to customers, but engaging customers is useful as well. The one thing you said is that engineers are in a position to make a decision – it's more important that engineers are in a position to ask a question. Asking that question is not just in engineering. You surface that question throughout the organization as you see it become more and more important. If you ask a question and you get a really good answer, leave it at that. It's not supposed to delay the cycle; it's supposed to open the iris on that so that you can see it all. When those questions don't have good answers, it makes a lot of sense to start going into product management, customer success, and legal, and things like that.
Participant 1: Then, the second question is more related to the unit at which you're designing. I work in a B2B context, designing manufacturing software, and so we'll look at the optimization or utilization of a workforce for a particular warehouse. I work in aerospace, so we're looking at how many planes we can manufacture in a given month. A lot of the things that we produce or simulate [inaudible 00:43:22] so we could say, "If we increase the number of shifts, or if we increase the number of hours." I think eventually you could get to quality of life, but I think it's really hard in a simulation environment to say that you are or are not violating human rights. I don't think that we are, but if you are in this greenfield environment, how would you recommend operationalizing [inaudible 00:43:45]?
Schlossnagle: I think operationalizing it is really about putting the questions into the cycle, because you don't have the answers. I have sort of a morbid curiosity about the Boeing 737 Max stuff, as to how those decisions actually wound up in a product flaming down into a desert floor. It's horrid, but from a software engineering standpoint you can imagine that there wasn't an individual discussion that was like, "If we rush, we could kill people," followed by, "Fuck it, let's do it." That was really unlikely, especially in groups of people. You just can't get a group of people together unless there's some sort of [inaudible 00:44:30] to do that, which is why it's so important to have diverse groups of people, so that you have somebody who says, "I don't think that makes sense."
Even in aerospace you're looking at software quality. What are the ramifications? Some of it is as simple as a software quality question: your software is never going to be perfect, so it's a stupid goal to have it be perfect. How many bugs is enough bugs? That's a huge, hard conversation to have with an organization. When the bug causes loss of money, or loss of life, or loss of a shopping cart – those are three different things, and you can dial that in. The question is, why are we dialing it that way? If someone says to rush on something that's a medical device, you can say, "Is that the right thing to do? Does that make sense?" I think it's really exploring the question space that's more important.