
Why Should We Care about Technology Ethics?


Summary

Catherine Flick looks at the recently updated Association for Computing Machinery's Code of Ethics and Professional Conduct. She reflects on the ethical issues, the ways to think about our job through a lens of ethics and responsibility, and tells us what developers can do to deal with ethical issues.

Bio

Catherine Flick is a reader in computing and social responsibility at the Centre for Computing and Social Responsibility at DMU. She is work package leader for the European funded projects COMPASS and Living Innovation, which look at integrating ethics and responsible innovation into SME and large company business practices, and has a long history of working on European projects in this area.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Flick: My name's Catherine [Flick], and I am a reader in computing and social responsibility, which up until about two years ago, nobody could really work out what that meant. I'm at the Centre for Computing and Social Responsibility at De Montfort University. My background, I just want to give you sort of a little bit of a background to explain why I feel vaguely qualified to stand up in front of you and talk about this sort of stuff. I have a Computer Science degree from Sydney University. I have then done stuff on computer ethics for my Master's and Ph.D., and I'm now a member of the Committee on Professional Ethics for the ACM, as well as doing a whole bunch of work into things like responsible research and innovation, which I'm going to get into in this talk a little bit, that's my background. That's why I'm here, I guess.

It's been very interesting seeing how ethics has come into the mainstream discussion. About 5, 10 years ago, there was no way you would have ever seen me standing up in front of such a lovely big audience as this because people were saying basically, "What? You want to talk about what at my technical conference?" What I really love about the fact that it's now come into the mainstream is that I can actually come out and talk to you, not just about the kind of philosophy stuff, which I really love, but also the kind of practical applied, how do I actually do this stuff? How do I actually get stuff in and get it done? I may be an academic but I don't really see myself as sitting up in the ivory tower. I want to come down and actually work with "real people" to actually implement it in their daily lives. Most of my other research projects are about working with industry. I've worked with SMEs. I've worked with big organizations. You'll see some of my projects that I'll do a little bit of a plug for at the end, and you can get involved too if you're interested.

The question that I got asked five or so years ago was, "Why should I care about ethics? I'm just a software engineer. I just do my job. I just write the code. I just test. I just deploy. I'm just a sysadmin. Why should I care about this stuff?" Obviously, we've got things that have shown up over the last couple of years which have made people start to realize that, "Actually, yes, ethics has started to become an issue, not just for my company, but it's actually becoming an issue for me as an employee of a company." We've seen instances of employees pushing back against organizations that they consider to be doing unethical things. We're seeing the general public pushing back against unethical uses of their own data, for example, in the Cambridge Analytica case, and we're seeing governments, particularly in the Cambridge Analytica case, pushing back against big tech companies that are taking advantage of the gray areas of the tech innovation sphere.

This is one of the many reasons why people have started to care about it, particularly in the last couple of years. They're starting to realize that the stuff that I'm doing, whether that's being a sysadmin or testing or developing or project managing or whatever it is that I do in my daily life, actually has bigger effects than just me. It's becoming more of a thing that I have some input into. And it's not only big companies. Most of the companies that get talked about in the news are fairly large ones, because most people have heard of them, but based on the research and the work that I've been doing with smaller companies, it's not just big companies that need ethics, smaller companies need ethics as well. All companies need ethics. This is essentially to ensure market acceptability.

You may be on Facebook or LinkedIn or something, even though you don't like Facebook or LinkedIn, because you feel obliged to be on it because of the peer network, or because it's the only way to get your CV out there, or whatever. That's what we call social acceptance. You've accepted the fact that you need to deal with that technology, that you need to interface with that technology. However, social acceptability is the idea that people will want to use that technology. In order to get that, you need to basically build trust with that group of users, and you need to build in things that they actually want to use and don't just feel obliged to use. You'll have happy employees, because happy employees doing things that they feel are doing the world some good are going to be much more productive and much more beneficial to your company. You'll have more loyal customers, because they actually want to use your product or service or whatever it is you're creating. You'll have potentially longer-term profits.

We're still in the early stages of this. It seems like a logical idea that if you have all of the above, you'll probably have longer-term, certainly more sustainable profits, if not higher profits. You'll make the world a better place, which generally most tech innovators want to do, and you'll be able to sleep at night, which is comforting for all of us. I'm not sure how Mark Zuckerberg sleeps, but anyway.

General Issues We Need to Address

However, there are some general issues that we need to address before we get to those spaces. One of those issues is that basically, we've been working in a gray area. We've been working in the Wild West for a long time now. When I grew up with the internet, I think I got the internet first when I was about 15 or 16, and it was very Wild West back in the mid-'90s. It wasn't regulated. You could do pretty much whatever you liked on it.

For the most part, up until the last couple of years, and certainly still in some tech innovation areas, it's still quite unregulated. There's still quite a lot of gray area, and you're seeing a lot of the big companies who have taken advantage of those gray areas build their monopolies and kind of use that as a basis to, in some ways, squash smaller companies that are doing things the right way as they come up through the ranks, let's just say. You also, as tech people, as software engineers or sysadmins or whatever, don't have to be certified. Unlike, say, a civil engineer who needs to be certified in order to build something, you guys don't need to be certified in order to build quite important and mission-critical infrastructure.

There's an increasing reliance on computing technology. As we've seen, the more infrastructural it's been getting, the more likely it is that there are points of failure that could really have significant impacts on society. You have increasing dissatisfaction with these technology giants. We've seen that where people have been moving away from Facebook, people are starting to push back against using Google products, etc. There are policy vacuums. People are still taking advantage of some of these gray areas where there's no policy. Up until the GDPR came in, for example, people were still doing fairly dodgy stuff with data. Now that GDPR is here, they've had to finally work that out and actually deal with those policy issues that have come in. And there are still many unknowns. We don't know what the future's going to be like. We don't know what the next disruptive technology's going to be. We don't know what the next kind of regulatory environment is going to look like.

Speaking of regulation, this is what a lot of companies are worried about. They are worried about being heavily regulated, because they see that as a roadblock or a speed bump or whatever you want to call it to innovation. What I really want to get across in this particular talk is that certain sorts of regulation, yes, may go a little bit too far, particularly if you've got people in power that don't understand how technology works. I'm Australian. If you've been watching the Australian tech press recently, or the tech policy recently, you'll know they're not very savvy as to how cybersecurity and encryption work. That sort of regulation, obviously, is a real problem, but then we have things like the GDPR, which, love it or hate it, has actually perhaps moved things more towards how society would expect technology to function. But you want to be in control of this regulation. You want to be driving the regulation, not letting people that don't know anything just dump it on top of you, so you need to become part of this process. This is also where I want to go a little bit in this talk.

Like I said at the beginning, ethics is hip again. Everyone is jumping on the ethics bandwagon. I've seen startups, I've seen all sorts of interesting initiatives put out by big consulting companies, by small consulting companies, by various universities and things who have suddenly realized that perhaps they should actually be teaching ethics in their computer science curriculum. This is very interesting to me, because I've been doing this for many years now, and it seems to me that a lot of these activities are essentially reinventing the wheel. What I find frustrating about that is that we shouldn't be reinventing the wheel, we should all be working together, but the problem is that a lot of the ethics talk has been stuck in academia because none of the tech people wanted to talk to us. So we're now coming out and saying, "Hey, tech people. Come and talk to us. Let us help you get 10 years ahead of where you'd be starting if you reinvented the wheel, and actually take in some of this latest stuff that we've been doing, that we've been working with people to make practical, but also philosophically relevant and logical and all of the things that you need for ethics to actually work."

The Big Questions

The big questions that I think are important to consider more generally are bigger-picture questions for the tech industry. Should the tech industry be more heavily regulated? Should it? I would probably say yes and no, because I don't think the tech industry should be more heavily regulated as a whole, but there should perhaps be individual bits and pieces of regulation that really help tech and society to come together and agree on how it is that we want our society to progress and how it is we want our society to integrate with technology, because technology is not separate from society. Technology affects society, and society creates technology to solve problems and things like that. If we can make regulation actually work for both of those things, then it will work. However, if you just say, "Oh, well. All tech professionals need to be registered and certified," how do you even start to define that? That gets really complicated and you can't do that.

So my answer to the next question, "Should IT professionals or practitioners require certification?" would be no, although that's probably going against the ACM's view of things. Personally, I think that certification for the general technology population is not the best way to go, but you could have certifications for certain groups of people. Say, for example, if you're a software engineer that works on software for critical infrastructure, a bit like a civil engineer, perhaps there could be a level of certification before you become involved in that sort of infrastructure development. That's another question for another time. I'd be happy to talk about that later.

Another big question is, "Why are all these groups reinventing the ethics wheel, and what are their motivations for doing so?" That's also a good question to ask: are they just trying to make a quick buck off this new ethics bandwagon, and are they just selling you a solution that lets you say, "Yes, I've checked that ethics box," or are you going through the processes within your company to really integrate ethics into your everyday business practice?

Then, the biggest question of all. How can we design, develop, and deploy our technology responsibly? I would also put maintain in there, which I forgot to put in. How do we actually do this responsibly? This is what I'm going to mostly talk about now.

Responsible Innovation

Responsible innovation is kind of a European Commission umbrella term that mostly covers all of the research and innovation that they fund in particular. This is not just academic funding; they do a lot of shared funding with industry as well. It's an umbrella term for ethics, sustainability, diversity, open access, public engagement, and science education. All of these things they consider to be a key component of innovation. These should all be things that you should be engaging with if you're involved at all in research, so R&D-type stuff, or if you're involved in innovative development as well, if you're an innovation company.

The way that we, sort of the academics, I guess - this is my bit where I come down from the ivory tower and I tell you what we've been up to for the last 10 years - the way that we conceptualize this is that some of the best ways you can deal with these particular problems of ethics, sustainability, etc. is to anticipate what the potential impacts of your technology or your innovation or your research might be. What might happen in the future? I'm not saying stare into the crystal ball; I'm talking about plausible things. This is not just what problem do I want to solve, but what might the further impacts of that solution be?

What social and environmental impact might our tech have? Environmental is coming quite heavily into this now. Sustainability wasn't on the original European list, but we've now stuck that in there because we think it's really important. The way that you can do this is using things like foresight exercises, misuse cases - not just use cases, but how could our technology be misused - and this idea called "design for evil," which is a cool concept where you take the sorts of things that you're doing and ask, "Well, how would the evil government employee use this?" or, "How might the evil company deploy this?" You design for evil just to see how things could be misused. You look beyond that intended use, because everyone has ideas about how the technology is going to be used, but often they don't think beyond that. They have a very narrow focus. "Oh, we're solving this problem," but what might the actual further effects of that be?

The next stage, the next thing that you can do, is to reflect on what your purpose is in doing this. What is your motivation? What problems are you solving? Are they problems that actually exist, or are you just creating a solution that's looking for a problem? That happens too often in tech, unfortunately. What ethical impacts might our technology have? I'm going to go into how you do that in a little bit. What don't we usually think about? Do we have any blind spots? Are there some perspectives that we could get that could potentially open up more areas for discussion about our purpose and our potential impact?

The ways that you can do that include using things like standards, codes of ethics, and also engagement. This is where social acceptability comes back in. If you engage with diverse sets of users - intended users, but also perhaps unintended users, perhaps people who aren't direct users but are impacted by the use of your technology - if you work with those people, you're more likely to get more perspectives that will help you to determine what the impact of your technology might be. The ways that you can do that include co-creation activities, where perhaps you co-design some software or product or service, and we're actually working with some big companies at the moment, including Telefónica, Atos, and Siemens, to do this sort of exercise. User feedback, obviously, is a fairly classic thing, so doing user testing and user feedback gathering exercises is a classic way of dealing with this as well. Broader diversity initiatives: the more diverse the team, the more likely you are to pick up issues with your technology before it gets out to a broader audience. And dialogues with all stakeholders: if you have those conversations with people who might be affected by your technology, you're more likely to pick things up earlier on and avoid issues later on.

And then finally, action. How does your workplace support all these activities? This is where you can look at things like HR practice in terms of things like recruitment. You can look at the workflows, the project management kind of aspects. Do you give space to have these sorts of engagement activities? Do you give your employees space to actually go and think about the ethical issues and do you have processes by which they can report potential problems without getting fired? How are your big decisions being made? Do you have the people who might be affected by those decisions come and help you make them, or do you just kind of do them by yourself? Also, where do you get your funding from? But ethical VC is a completely different story for another time.

Association for Computing Machinery

This responsible innovation thing is where I'm focusing on how we conceptualize the practice of ethics, if that makes sense: how we conceptualize making these sorts of approaches practical and hopefully useful, and there are many tools and things that you can use to help you do those particular steps, which I'll link to at the end. The ACM - you may or may not know these people - is the Association for Computing Machinery, a professional organization for computing professionals with quite a few members. When they say computing professionals, they try to encompass a very broad definition of that. Certainly, in the Code of Ethics, we have gone for including things like students, aspiring computing professionals, anyone that uses computing in their day-to-day work. That could just be finance people who do data entry or manipulate Excel spreadsheets or whatever; we would consider them to be computing professionals because computing pervades everyday work life for a lot of people.

The ACM's front page has a very lovely aspirational statement where they say, "We see a world where computing helps solve tomorrow's problems, where we use our knowledge and skills to advance the profession and make a positive impact." I think that's a nice way to start out talking about ethics from the ACM's perspective. About two and a half years ago now, we started rewriting the ACM Code of Ethics, because the last time the ACM Code of Ethics was updated was in 1992. This diagram shows you what the current technology was in 1992. This was what we were using. We were using very old modems to connect to the internet. This was a webcam that was actually hooked up somewhere in Cambridge, I believe, pointed at a coffee machine. It was basically a fancy Philips camera that they just set up to take photographs of the coffee machine every so often, which was pretty cool but very hacky, and that was the current state-of-the-art web browser on the right-hand side there, Mosaic, which as you can see had graphics but was mostly still text-based.

Back in 1992

Since 1992, things have moved on. Back in 1992, the profession - the ACM, the profession of computing - was particularly concerned about systems that were physically huge. We didn't have cloud computing back then, we had big computer rooms, and so they were worried about physical security. They weren't worried about cybersecurity. They were worried about whether you have the best locks on your doors; that was all they were worried about in terms of security. They were worried about infrastructural issues, which is still the same as today. Unfortunately, I think we still have the same problems that we had back then. They were worried about data - the collection of data, the use of data, privacy, that sort of stuff. They were worried about that back then too. That hasn't changed much.

They were worried, particularly, about copyright. Back then, there were no streaming or subscription services, there was no BitTorrent, etc. The way that you would pirate software back then is you would copy a disc and hand it to your friend. You might download something, but it would have to be fairly small. Copyright was the big issue of the day. We saw that develop in terms of things like the DMCA and all of the anti-copying measures on DVDs, etc. Contracts and law were still important back then, as they are now, but the laws looked very different from how they look now, and we've changed a lot in the way that we interpret those laws as well. They were worried about requirements analysis. They wanted to get professionals out actually talking to people about what they need, not just what they think they need, and so, yes, we still do this as well, hopefully.

And they were worried about continuing professional development, so education and training, which is one of the kind of remits of the ACM, so that kind of makes sense that it would be worried about that and they're still concerned about that as well. They were worried about all of these classic ethical issues, so privacy, dignity, autonomy, trustworthiness, honesty, discrimination, and confidentiality. These were all core components of the 1992 Code of Ethics. I think it held up pretty well for a while, but things massively changed. It was very dated.

At the moment, physical security's still an issue but it's not the most pressing issue. Copyright and intellectual property rights are now much more complex than they were back then. Data privacy is definitely more complex than it was back then, especially with things like machine learning, big merging of data sets, etc., and the anti-discrimination aspect that we had in the old '92 code missed a lot of stuff; we've moved on quite significantly as a society in terms of diversity and anti-discrimination. It had a whole bunch of '90s jargon like computer viruses, which I think is quite sweet. Yes, it was very '90s jargon. We've tried to strip that out a lot in the latest one, but you can never really avoid it. It was very overbearing. It was very, "Thou shalt not … Here are the rules and if you break them, you're out," kind of deal. It was very ACM-specific. We've made the new code a lot broader and aspirational for anyone who wants to become a computing professional, not just those who are members of the ACM, but also professionals who aren't.

It just needed to be updated, to be honest. Our values and our uses and our abuses of computing have changed. We care about different things now and we needed to update it according to that. And we had an enforcement policy that was completely unfit for purpose. If you grab me at the coffee break, I'll tell you why.

What we did is we got a bunch of people in a room, at various stages. We did three drafts of the Code of Ethics, each with varying degrees of public input. We had a massive discussion forum. We had a survey sent out to all 100,000 members. I had to do all of the qualitative data analysis. If you've ever done qualitative data analysis in your life, you'll know that doing 5,000 times 27 short questions is a lot of in vivo coding. Yes, that took a long time, but it was very worthwhile doing, and I'm going to be writing papers out of that for the next decade, as well. We spent weekends in Chicago. I was the only non-American and also the only woman throughout the whole process. We had people drop in and out, but I was the only consistent non-American and female representative. This is our team. I was taking the photos, so I'm not in the picture, but Don Gotterbarn, who's the older gentleman on the right-hand side with the beard, was actually involved in writing the 1992 code as well, so we had some consistency there. Bo Brinkman works at Google, and Marty and Keith are both professors at universities in the States.

Some Major Changes: Copyright

Some of the major changes I thought might be interesting to you: we've changed quite a lot. Particularly, I'm very proud of the changes that we made to the copyright aspect, partly because I wrote them. Well, I wrote the draft wording, which then got fiddled around with, but the old code was very, "You must obey copyright." We're much more nuanced these days. We're actually much more respectful, I would hope, of creative works and the people that create those works, and we would want to respect their wishes, which is why we turned that on its head and changed it to, "Respect the work required to produce new ideas, inventions, creative works and computing artifacts." That's because there are lots of ways that you can deal with your creative work that aren't just copyright. It may even be things like you just stick it in the public domain and anyone can do whatever they like with it, but it also allows us to be a little bit more thoughtful about how we apply things like copyright. In that particular principle, we have suggestions that if you are the holder of a copyright or a patent or whatever, you shouldn't unnecessarily restrict somebody from using it for a reasonable use, for example. So it brings in some of the sort of reasonable use policies that you have in various countries, and it also allows a bit more freedom to choose what you want to do in terms of dealing with that.

We also suggest some more aspirational things throughout the new code, things like how you can actually contribute to making things better. We talk about open access. We talk about contributing to open source. We talk about protecting work that's put into the public domain, so things like the commons. How can we actually protect the commons? The digital commons is something that's arisen since 1992, things like Wikipedia, so how do we make sure that we can protect those things and not rip them off and use them for our own purposes without their consent?

Social Media, App Stores, Uber, Etc.

Back in 1992, there was no such thing as social media. There was no such thing as app stores. There was no such thing as Uber and other disruptive technologies. If I get tumbleweed here - at my last talk I had crickets, and someone came up and said, "What do you mean by crickets?" so I thought, "Okay, I'll have to change that word" - but yes, there was nothing. There was nothing in the '92 code, but in the 2018 code, we now have a specific thing that calls this out. If you're doing something in infrastructure, if your garage-developed app becomes the next Uber and it becomes infrastructural, you have a specific and special responsibility to society to make sure that it respects society's expectations of that sort of infrastructure.

You need to recognize and take special care of systems that become integrated into the infrastructure of society. Technically, things like Facebook, for example, would be falling fairly foul of this, certainly at the moment, anyway. One of the quotes from that is, "Part of that stewardship requires establishing policies for fair system access, including for those who may have been excluded." This is because we had a lot of questions throughout this process about what we do about app stores that delete apps for no reason. If Apple kicks you out of the App Store for seemingly no reason, there is currently no redress, there's no real way to appeal that. We're saying that if you're going to kick people out of a service - if you're an Uber driver and you get kicked out for no reason, for example - you should have the ability to question that and to seek redress.

The other big thing that changed was the focus of the Code of Ethics. In the previous code, we were looking at professionalism. We were very keen that professionals would act as professionals, which meant striving to achieve the highest quality, effectiveness, and dignity in both the process and products of professional work, where excellence is the most important obligation of a professional. We still think that excellence is a really good thing to have, and we still have some stuff in the code that looks a bit like that - you should be professional, you should make the highest quality work that you can - but we don't think that's where the center of things should be now. We want to center the profession of computing around creating things for the public good. How do we serve society? We can have a very excellent quality piece of work that completely exploits society, and that would be okay in the 1992 code because it's about excellence. In the 2018 code, that's been overhauled. Now, in 2019, that's generally no longer where the movement in society is. We want software and technology that actually serves society, rather than potentially exploiting it. Not just excellent work - we want excellent work that goes beyond that.

Then, there's just one more I want to point out. If you're an ACM member - and if you are, you have signed up to this code, because that's part of the rules of membership - you are required to reflect on ethical challenges in your work. This comes back to what Ann was talking about before. It's okay for you to ask these questions in your work; in fact, we require you to do it. We require you to reflect on those ethical challenges, not just on your competence to solve a problem. Back in 1992, it was about professional competence: can you do the work, and you should get involved in setting standards for that work if that's something that you're able to do. Now we look not just at technical knowledge and skills, but at your ability to reflect on the work that you're doing, to communicate with people about that work, and to have a much broader set of skills, like other professions are required to have, particularly in terms of recognizing and reflecting on ethical challenges.

Using the Code

If we think back to what I was talking about in terms of responsible innovation, I had anticipate, reflect, engage, and act. The Code of Ethics is a really good space for working on the reflection side of things. It needs to be used holistically, however. You can't just read the code once and think, "Ah, yes, I read the code. Now I'm going to be ethical for the rest of my professional life." You need to be engaging with it in the work that you're doing, as a kind of process. If you're designing or developing a new piece of technology, you should be actively engaging with the Code of Ethics to have a look at how your technology actually reflects the code. Does it violate things? Is it positively contributing, as is required of it?

Any technology you have, any dilemma, any major decision, method, even things like deployment - particularly if you're looking at deploying into new areas, like new countries or whatever - all of these should be carefully considered against the code. I'm going to give you a couple of examples of how you do that. If you are an ACM member, you have agreed to abide by the code, so it's probably a good time to listen to how you do that.

Diversity

How do you deal with diversity? Diversity's been a big thing that people have been talking about. We need technology to become more diverse. We need to have not just diverse employees, but diverse user groups, so that you think about who's going to be using this and who might be impacted by it. We've seen a lot of issues where diversity has not been particularly well dealt with, particularly in terms of machine learning recently, but the code itself has quite a few aspects that you can think of in terms of diversity. This is by no means an exhaustive list. These are just the ones I could fit on the slide.

For example, in 1.4, which is "Be fair and take action not to discriminate", there's a quote that says, "The use of information and technology may cause new, or enhance existing, inequities. Technologies and practices should be as inclusive and accessible as possible and computing professionals should take action to avoid creating systems or technologies that disenfranchise or oppress people." The reason we put that in was because at the time we were writing this, there was a group of software developers who were standing up and saying, "I'm not going to build a Muslim registry for the American government." We really wanted to make sure we reflected that specific aspect, because we can see more and more technology being used by governments to oppress, and I would certainly hope that most people here would think that that's not necessarily the best thing for the technology industry, and we wanted to capture that.

Then we say, "Failure to design for inclusiveness and accessibility may constitute unfair discrimination." This is the clause that we got the most hate mail about, and we got at least three death threats that I know of because of the fact that there is a small group of people that are not really keen on diversity but we felt this is where we're moving to, and they're kind of dinosaurs being left behind, to be perfectly honest. We think it's also very important that this is recognized because we still have problems in tech in terms of diversity and we need to be addressing them.

Another one is "Maintain high standards of professional competence, conduct, and ethical practice." We say, "Professional competence starts with technical knowledge and with awareness of the social context in which the work may be deployed. Professional competence also requires skill in communication, in reflective analysis, and in recognizing and navigating ethical challenges." So this is about being aware of the fact that diversity is an issue and thinking about ways of overcoming it in your own work. Then we have 2.7, which is, "Foster public awareness and understanding of computing, related technologies, and their consequences." The quote from that is, "As appropriate to the context and one's abilities." We didn't want to make everyone feel like they have to go out and be a technology evangelist. "Computing professionals should share technical knowledge with the public, foster awareness of computing, and encourage understanding of computing. These communications with the public should be clear, respectful, and welcoming."

The reason this ties back to diversity is because if you've not got a welcoming atmosphere for people, then they're not going to want to engage, and you can potentially lose out on diversity aspects. Things like gatekeeping, for example: I've been told many times that there was no way I could be in tech because I'm female. I worked in industry for five years throughout my early career and, yes, I moved back to academia for a reason. Gatekeeping and things like that are really problematic issues, particularly in tech, and I see it a lot in video games and such as well, which is not an area I work in, but yes, it needs to be welcoming to people, not constantly questioning their right to be there.

Data Analytics and Machine Learning

Data analytics and machine learning. I realized, when I was making these slides, that I've picked, somewhat unfairly, on Microsoft. Apologies to any Microsoft people in the room here, but they had not a great year in 2016. If you do any data analytics and machine learning, there are now specific things in the code that address what it is that you do. Apart from general privacy and data stuff, there wasn't previously much that reflects the modern usage of data. For example, in "Be fair and take action not to discriminate," we have the same issues that I talked about in the last slide. Respect privacy should be fairly obvious, but we also have a clause in there that says, "Personal information gathered for a specific purpose should not be used for other purposes without the person's consent. Merged data collections can compromise privacy features present in the original collections. Therefore, computing professionals should take special care for privacy when merging data collections." This should, I hope, be a fairly obvious problem that we're addressing, because it just didn't exist back in 1992.
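To make that merged-data point concrete, here is a minimal, hypothetical sketch (the datasets, column names, and values are invented purely for illustration; nothing here comes from the talk) of how two collections that look harmless on their own can re-identify people once they are joined on quasi-identifiers such as postcode, birth year, and sex:

```python
# Hypothetical illustration: merging two "harmless" collections re-identifies people.
import pandas as pd

# "De-identified" usage data: no names, just quasi-identifiers plus a sensitive field.
usage = pd.DataFrame({
    "postcode":   ["LE1 9BH", "LE2 3AA", "LE1 9BH"],
    "birth_year": [1984, 1991, 1984],
    "sex":        ["F", "M", "M"],
    "condition":  ["diabetes", "asthma", "hypertension"],
})

# A separately collected, seemingly innocuous contact list.
contacts = pd.DataFrame({
    "name":       ["A. Example", "B. Sample", "C. Person"],
    "postcode":   ["LE1 9BH", "LE2 3AA", "LE1 9BH"],
    "birth_year": [1984, 1991, 1984],
    "sex":        ["F", "M", "M"],
})

# Joining on the quasi-identifiers links names back to sensitive attributes,
# which is exactly the situation the Code says needs consent and special care.
reidentified = usage.merge(contacts, on=["postcode", "birth_year", "sex"])
print(reidentified[["name", "condition"]])
```

Each table reveals little on its own; it's the join that compromises the privacy features of the original collections, which is why the clause singles out merging.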

In 1.7 we have "Honor confidentiality." "Computing professionals should protect confidentiality except in cases where it is evidence of the violation of law, of organizational regulations, or of the Code," which is a very important part. "In these cases, the nature or contents of that information should not be disclosed except to appropriate authorities. A computing professional should consider thoughtfully whether such disclosures are consistent with the Code." One of the important parts of the new legal clause, which I didn't put up there, is that if you are dealing with a law and you think that law is unethical, you have a responsibility to call that out and try to change that law, or at least make movements or express the fact that this is an unethical aspect that you're having to deal with. However, what degree you go to in that is up to your level of comfort, because the ACM can't protect people who go out and suddenly start breaking the law because they think it's unethical. You have to have really good justification based on the Code of Ethics to justify your choices.

2.5 is, "Give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks." This is the really big one for machine learning. It says, "Extraordinary care should be taken to identify and mitigate potential risks in machine learning systems. A system for which future risks cannot be reliably predicted requires frequent reassessment of risk as the system evolves in use, or it should not be deployed." If you can't monitor your machine learning system, you shouldn't be putting it out there. This is what happened with Tay. They wrote a beautiful system, they stuck it on the internet, and then they just left it. Of course, it went bad, right? If you can't manage what's happening... We're not asking you to explain, we're not asking for transparency in your system, we're asking you to be professional about it: to actually monitor what it's doing and make sure you pull it if it's doing the wrong thing. You have that responsibility. You can't just chuck stuff out there and leave it anymore.
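As a concrete illustration of that "frequent reassessment of risk" idea, here is a minimal, hypothetical sketch (none of these function names come from the talk or from any real monitoring product) of the kind of scheduled check that would catch a Tay-style failure: sample recent outputs, score them, and pull the model if the risk signal goes out of bounds.

```python
# Hypothetical sketch of periodic risk reassessment for a deployed model.
import random

FLAGGED_RATE_THRESHOLD = 0.05  # assumed acceptable rate of problematic outputs


def sample_recent_outputs(n: int = 200) -> list[str]:
    """Stand-in for pulling a sample of the model's recent production outputs."""
    return [f"output {i}" for i in range(n)]


def flagged_rate(outputs: list[str]) -> float:
    """Stand-in for an abuse/toxicity classifier; returns the fraction flagged."""
    return random.random() * 0.1  # placeholder metric for this sketch


def disable_model() -> None:
    """Stand-in for taking the model out of service (feature flag, rollback, etc.)."""
    print("Risk threshold exceeded: model disabled pending human review.")


def reassess_risk() -> None:
    """Run this on a schedule, e.g. hourly; the point is that someone is watching."""
    if flagged_rate(sample_recent_outputs()) > FLAGGED_RATE_THRESHOLD:
        disable_model()


if __name__ == "__main__":
    reassess_risk()
```

The specifics, such as thresholds, classifiers, and the rollback mechanism, would depend entirely on the system; the point is simply that deploying the model comes bundled with an ongoing obligation to watch it and to pull it when it misbehaves.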

Then finally, ensure that the public good is a central concern. In this particular one, I've called out the clause that says, "People, including users, customers, colleagues, and others that are affected directly or indirectly should always be the central concern in computing." Just because you've got a fun toy that goes out there and you think, "Ah, yes, it's just a fun toy," you should still be thinking about, "Is this a good thing to put out there in the first place?" Yes, it might be a proof of concept, but maybe it's a proof of concept that's really biased in some way that could potentially mislabel people or cause some sort of distress to people. You need to be thinking about that, and you have a responsibility to not put that out there if it's going to be doing that. That's some hard words for machine learning people. Sorry. My husband does machine learning so, yes, that's my slight excuse there.

Tech for Vulnerable People

Tech for vulnerable people. This is if you're working with vulnerable people at all: if you work with older people, children, minority groups, or if you're wanting to deploy your technology out in developing countries, these are some of the things you should probably be thinking about. One of those is avoiding harm, because we're worried about harm. We're going to have a paper coming out just on the harm principle; there's a whole talk just in how we got to the harm principle, because there was a huge discussion about how we define harm and whether we should be avoiding it or preventing it, or what we should be doing about it.

What about things where you have to cause harm in order for there to be a broader good? If you're developing a surgical robot, you're going to be cutting into people, which is technically a harm, but actually, it's for a good purpose. How do we actually capture that without getting the nitpickers to say, "Ah ha, but that means I can't do medical technologies," for example. In terms of vulnerable people though, we say that, "Well-intended actions, including those that accomplish assigned duties, may lead to harm. When that harm is unintended, those responsible are obliged to undo or mitigate the harm as much as possible." So if you put something out there that harms people, it's your responsibility to wind that in. Hopefully, you've got a bit of an idea of how it is that you can sort of start thinking about the stuff that you do from the perspective of the code.

Then, finally, I want to just do this last one here, which is "Design and implement systems that are robustly and usably secure." This is a new thing in the new Code of Ethics. There was nothing about cybersecurity-type stuff in the previous one, as I mentioned. We say here that, "Breaches of security cause harm. Robust security should be a primary consideration when designing and implementing systems. In cases where misuse or harm are predictable or unavoidable, the best option may be to not implement the system." So this is another one where it's saying, if you can't deal with it, don't deploy it. This is something that tech people are very loath to do, and we want to make it more normal for people to pull projects. It's okay to fail, and I think that's a really important thing.

Take Home Message: Responsible Innovation

The take-home messages in terms of the code and responsible innovation are that we can anticipate the impacts of the technologies we create, think beyond the opportunities that arise, and look at longer-term social and ethical impacts. We can reflect on the ethical issues that arise from technologies. We can use things like the code, and I've just shown you how we can do that through those examples. We can use things like technology assessment. There's value-sensitive design. There's a whole bunch of methods and things that you can use; if you're interested in looking those up, talk to me later. You need to engage with those likely to be affected by the technologies, not just those who will pay for it or be directly impacted by it, but people more generally who might be indirectly impacted. And we can ensure that we act on those by having good business processes in place that promote ethical thinking, for example, responsible innovation practice, codes of practice, value statements, all those sorts of things.

For some more information, if you're interested, this is the ACM's Committee on Professional Ethics, their website. We have case studies. We have the code. We have Q&A stuff there. I'm also involved in a Responsible Innovation Compass beta test, which is for small to medium enterprises looking at how far along the responsible innovation kind of track they are. It's a self-check tool, so you can just answer questions about your company and see how good you are. There's a virtual summit coming up. If you're involved in Smart Homes and Smart Health, if you're interested in how that might work with these sorts of aspects, Living Innovation is another project that I'm working on. If you're interested in the stuff that I do, I'm at leader.net. I do lots of weird stuff. On Twitter, I'm @CatherineFlick and COPE is @ACM_Ethics. Thank you very much.

 


 

Recorded at: Jul 05, 2019
