Key Takeaways
- We can define tech ethics as considering how preventable problems, such as individual or group disadvantage or injury, might result from an IT system and then taking reasonable steps to avoid them.
- The key to behaving ethically as a developer is to protect people from known harm at the virtual hands of our software.
- Forget philosophy. Handling complex ethical problems, like avoiding biased machine learning training data or keeping users safe from political manipulation, is just an extension of ordinary professional behaviour.
- As engineers we potentially have the power to withhold our budget. This consumer power is a low-effort way to drive change. We also have professional power, since our skills are in short supply.
- We’re working on some simple checklists and processes to support ethical decision making. These are being made available via GitHub.
In March, Stack Overflow published its Developer Survey for 2018, and for the first time it asked questions about ethics. The good news is that, to the question “Do Developers Have an Obligation to Consider the Ethical Implications of Their Code?”, nearly 80% responded “yes”. However, only 20% felt ultimately responsible for their unethical code, 40% would write unethical code if asked (most said “it depends”, which I read as “yes, but I’d feel bad”), and only 50% would report unethical code if they saw it.
If code had little impact on the world, then maybe this wouldn’t be a problem. If I write an algorithm that disadvantages 100 people, that’s bad, but the effect is limited. However, if I do the same thing at Facebook or Google, with billions of users, the result is much more severe. Scaling up can be bad as well as good.
Most of us don’t work for hyperscale companies, but the aim is usually to grow, and culture is hard to change. Corner-cutting or dodgy practices at the start might feel justified or pragmatic (such as Uber's decision to test an unlicensed self-driving car), but it is notoriously difficult to scramble back up that slippery ethical slope later.
There is plenty we could do; we developers have to design, write and deploy code. If we chose, we could be the last bastion of defence against unethical stuff going live. We've all heard the meme "you build it, you run it, you own it". Should we take ethical responsibility too? Legally, we may already bear it (as Volkswagen engineers now have the leisure to ponder). I don’t want to find myself saying "I was just following orders" in court, on the front page of the Daily Mail, or anywhere else, but how can I realistically avoid it?
What Are Ethics Anyway?
Forget the trolley problem. Tech ethics are not an obscure branch of philosophy. Ethics are just reasonable, professional behaviour. We can define tech ethics as considering how preventable problems, such as individual or group disadvantage or injury, might result from an IT system and then taking reasonable steps to avoid them.
What does this mean in practice? The kinds of ethical issues we’ve personally come up against are surprisingly mundane:
- Critically under-resourced projects. A project can’t be delivered at safe quality because it hasn't been allocated enough resources.
- Inadequate data security. The measures protecting customer data on a project aren’t good enough, which could potentially hurt users by exposing their personal information.
- Over-enthusiastic data collection. An application is taking an unnecessary risk by storing more user data than it needs.
- Inadequate backups. If there was a failure, users could be harmed by losing important data or services.
None of these ethical problems are new or esoteric. They’re not about Bond villains in underground lairs or blowing the whistle on evil government schemes. They are the kind of dull but important issues most companies struggle with daily and developers already lose sleep over. I’m sure we’ve all accepted things like this before and felt pretty terrible about it.
It’s not unethical just because something goes wrong. It’s software - things will go wrong. It’s only unethical (or unprofessional) if we don’t make reasonable efforts to avoid things going wrong in a dangerous way, to spot when bad stuff is happening, and to resolve problems once we encounter them.
Handling more complex ethical problems, like avoiding biased machine learning training data or keeping users safe from political manipulation, is just an extension of ordinary professional behaviour like keeping systems secure. There’s no leap from the realm of normal stuff we’re all qualified to discuss, like backups, to philosophical questions you need a PhD to tackle. Instead, it’s a continuous spectrum. At one end we have familiar problems with well-defined best practices (those backups), and at the other we have new techniques with unfamiliar failure modes and, as yet, fewer guidelines (like ML).
For example, let’s take the real-life case of the Kronos scheduling software popular with US firms such as Starbucks. In 2014, The New York Times revealed that the way it optimised for store efficiency had a very negative impact on the lives of the employees controlled by Kronos’ algorithms. I’m sure the developers hadn’t intended that; they just hadn’t foreseen the problems and had no existing guidelines to work from.
The ethical problem here was not the developers’ mistake. It was failing to appreciate that, with no history to learn from, mistakes might happen and would need to be spotted and rectified without the intervention of a national newspaper. As observability tools go, the NYT is generally not your best option. The problem wasn’t a lack of empathy from developers; it was a failure to understand where their software lay on the spectrum of known to unknown issues and to act accordingly. They under-tested and under-monitored - normal professional problems. They didn’t need a master’s in psychology.
We will eventually collectively develop best practices for the use of new stuff like ML, personalised algorithms and the rest. The difficulty right now is that we haven’t done so yet, leading to the (temporary) wild west. That’s not qualitatively different to when security or accessibility were new 20 years ago, but it is quantitatively different because so many folks are affected now when we get it wrong. That means we need to act and define and share those good behaviours much faster. The ability to self-regulate must match the speed of innovation.
The key to behaving ethically as a developer is to protect people from known harm at the virtual hands of our software. Whether the harm is from a terrifying sentient AI or an untested backup, we need best practices to guide us. There’s much debate right now over whether rules for ethical, or professional, behaviour for software engineers should be defined top-down (government legislation or professional bodies) or bottom-up (we police ourselves). In my opinion, since the most important thing is that we create, share and follow guidelines quickly and solve problems fast, we need a bottom-up approach driven by software engineers ourselves. Top-down alone would not be sufficient - it’s just too slow.
Of course, that’s easier said than done. We’ve all accepted unethical stuff in the past. A lone voice is seldom enough. If we can’t even always get the budget for decent disaster recovery (something well known and simple to explain), how on earth will we, for example, secure the resources to monitor our algorithms for racial bias? Especially when we don’t even know how to do that yet! Or how will we persuade our employers to drop a money-spinning feature we come to suspect is acting against our users’ interests? That sounds impossible! We’re all doomed!
On the grounds that anything is better than “doomed”, I’m inclined to start with the lower-hanging fruit. Lots of companies manage to guarantee the ethical basics like security patching, so enforcing responsible action can’t be impossible. What realistic options do developers have to police ethical behaviour?
Exerting Your Power
What can a software engineer do if he or she sees something unethical or something potentially unethical and poorly monitored? When you ask a typical engineer, three ideas leap out:
- Raise the alarm but get on with the work (the alarm probably won’t be acknowledged because management already know they’re misbehaving).
- Stand up and leave the project (or the business)!
- Become a whistleblower (the hardest choice; you’ll get little gratitude and have to live the rest of your life in Russia).
So, there’s one option the dev feels is unlikely to work (raise the issue), and two that come at such a high cost that he or she is unlikely to attempt them either. Were these the thoughts of those Stack Overflow respondents? No wonder they were so defeatist.
However, just because those are the options that spring to mind doesn’t mean that they are the only ones. We developers have a great deal more power than we realise. Specifically, we have consumer power and professional power. What do I mean by that?
Consumer Power
All engineers potentially have the power to withhold their budget (consumer power). If you’re sceptical about that, consider women speakers at conferences. Ten years ago, there were hardly any women talking at big tech conferences. Today, they usually at least reflect the proportion of women in the industry (~15-20%). That isn’t because conference organisers are a bunch of hippy do-gooders, and it’s not because it’s an easy win - getting speakers from outside the usual suspects is hard work. Conference committees bring in women because they get complaints from attendees if they don’t. It’s an example of developers exerting their power collectively by stating what they want for the price of their ticket.
Consumer power is a low-effort way to drive change, and engineers can apply it whenever they buy or use tech products and services. For example, if enough devs ask what their cloud vendor’s renewable energy policies are, those policies will improve. You can also switch from a provider you consider unethical to one you consider less so - as long as you tell them why you’re moving!
Professional Power
The second power that developers have is that of their professional expertise. Techies are in short supply. Companies are keen to hire and very keen to retain them. If a clear ethical (aka professional or responsible) tech process is something we devs expect of employers, we’ll get it.
What is an Ethical Process?
I’ve said we should expect ethical tech processes, but so far there aren’t any! We need to work together as software engineers to develop some, for at least three reasons:
- So we can hear about, and avoid, problems that other folks have already encountered.
- With a clear process it’s easier to agree whether a feature, combined with its level of monitoring, is ethical.
- Once we’ve agreed processes, we can automate them (see the sketch below)!
Let’s start by defining some simple checklists and trying them out to see if they’re effective. Checklists are a bit low-tech, but in the aviation industry they have been extraordinarily effective at increasing safety. Learning from existing best practices is a great strategy.
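To make the automation point above concrete, here’s a minimal sketch in Python of what a checklist gate could look like. Everything in it is an assumption for illustration, not an agreed standard: the ETHICS.md file name, the Markdown task-list format, and the idea of running it as a CI step are all hypothetical.

```python
#!/usr/bin/env python3
"""Minimal sketch: fail a CI build if the project's ethics checklist
still has unchecked items. The ETHICS.md name and the Markdown
task-list format ("- [ ]" / "- [x]") are assumptions for illustration."""

import re
import sys
from pathlib import Path

CHECKLIST = Path("ETHICS.md")  # hypothetical checklist file kept next to the code
ITEM = re.compile(r"^\s*[-*]\s*\[(?P<state>[ xX])\]\s*(?P<text>.+)$")


def unchecked_items(path: Path) -> list:
    """Return the text of every unchecked '- [ ]' item in the checklist."""
    items = []
    for line in path.read_text(encoding="utf-8").splitlines():
        match = ITEM.match(line)
        if match and match.group("state") == " ":
            items.append(match.group("text").strip())
    return items


def main() -> int:
    if not CHECKLIST.exists():
        print(f"No {CHECKLIST} found - add one describing this project's checks.")
        return 1
    missing = unchecked_items(CHECKLIST)
    if missing:
        print("Ethics checklist has unchecked items:")
        for item in missing:
            print(f"  - {item}")
        return 1
    print("Ethics checklist complete.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

The items themselves might be as mundane as “we have reviewed what personal data we store and why” or “backups have been restored in a test environment”. The point isn’t the tooling; it’s that once the expectations are written down, they become reviewable and enforceable like any other build gate.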
Because this has to scale to all the new stuff we’re building, we are all going to need to think about good practice in our new areas, both up front and in retrospect, and share our thoughts. Think test-driven development: considering failure modes early, and often, works. We don’t like to use open-source code without tests, but we currently accept source code without any thoughtful guidance from the author or other users on how to use it ethically. Maybe we shouldn’t?
To get this kicked off, we’ve started a GitHub account for ethical resources for devs: https://github.com/coed-ethics/coedethics.github.io. We’re also working with the UK tech think tank Doteveryone, and many others, to create some baseline ethical checklists and processes, which we’ll share for everyone to consider and contribute to. We’ve also got some industry groups lined up to try out these processes and give feedback on them.
We’ll be talking about the results of all this at the Coed:Ethics conference we’re running in London in July, which InfoQ will be recording and making available, but if you can’t make it to the conference you can still get involved.
Please join in and help us make this happen. Write ethical guidance for your open-source code. Contribute to our ethical checklists and guidelines, or just tell us about ethical challenges you have experienced yourself. No one is an expert on this stuff because it’s brand new. We all just need to use our own judgment and get stuck in. Let’s make developers the last bastion of defence against unethical code.
About the Authors
Anne Currie has been in the tech industry for over 20 years, working on everything from Microsoft Back Office servers in the '90s, to international online lingerie in the '00s, to cutting-edge devops and the impact of orchestrated containers in the '10s. Anne has co-founded tech startups in the productivity, retail and devops spaces. She currently works in London for Container Solutions.
Ádám Sándor moved from application development to a consultancy career in cloud-native computing. He is a senior consultant at Amsterdam-based Container Solutions, where he helps companies improve their delivery of business-critical software by combining his application development experience with knowledge of container-based infrastructure.