Key Takeaways
- As software becomes the fabric of society, we developers need to be conscious of the ethics of what we develop
- There are big ethical issues in today’s advanced technology, but ethics is also embedded in the small decisions we constantly make
- Bringing an ethical discussion into our practice is challenging (but we have to do it)
- Don’t use technology as an excuse; own your stuff
- Use your power, take a stand, choose ethically
Why ethics, why now?
I have always been interested in the philosophical and cultural aspects of technology. And in the last couple of years, I've become more and more concerned about the path that we're taking. There are a lot of aspects of our lives that are improving because of technology. But there are also worrying directions we're taking in politics, in the shape of our society, and in how we treat each other.
Technology, and in recent decades computers and software in particular, is a major force driving those directions. That makes software engineering one of the most influential practices shaping society today. But the industry doesn’t seem to own this social responsibility. There’s a tendency to ignore the consequences of the products we make, to see technology and software as something that justifies itself. Take Facebook’s, Google’s, and Reddit’s long-standing reluctance to take responsibility for the fake news and hate content distributed through their platforms. Or Airbnb’s denial of its impact on the housing crisis in cities. Or the many mobile games soliciting in-app purchases from young children. Technology lets us do things at a bigger scale than ever. We developers are constantly discussing how to implement these things, but where are the discussions about whether we should implement them?
A good indication of the low priority given to ethics is how computer science and software development programs are structured. Courses in philosophy, ethics, or cultural studies are almost never a required part of the curriculum. We’re creating generations of influential professionals who are not trained to see the broader picture and consider the implications of what they do. It’s all about KPIs; it’s never about whether we are doing good.
Ethics are embedded in what we develop
One field where we do see an increase in ethical discussion is AI and machine learning. These are areas where the ethical issues are rather obvious, because we’re starting to use computers for tasks that are more “human”: more open-ended, less computational by nature.
Consider autonomous cars, one of the major areas where these discussions are happening. AI systems for autonomous cars will need to make decisions similar to the classic “trolley problem” thought experiment. Drive straight and you will hit one person; swerve and you will hit another. What should you do? What if it’s a child vs. an adult? Or two people vs. one? How do you make this decision?
This is a classic ethical problem with no good answer and endless discussion around it. And now we expect a software system to make this decision, and we expect the developers and architects who design that system to come up with the algorithm that makes it. How do you do that? Who makes the choice? Who is responsible for the consequences of this algorithm? How do we take into account cultural differences and different value systems? These are hard questions, and we sort of expect developers (who have never even studied ethics or philosophy) to hack away at them.
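To make this concrete, here is a deliberately naive sketch of what “coming up with the algorithm” actually means. Everything in it, the names, the structure, and especially the weights, is invented for this illustration and resembles no real autonomous-driving system. Notice that the code itself is trivial; every ethical decision hides in the constants.

```typescript
// Hypothetical sketch: an "outcome scorer" for a collision decision.
// Every constant below is an ethical judgment disguised as a tuning parameter.

interface Outcome {
  pedestriansHit: number;
  passengersHit: number;
  childrenInvolved: boolean;
}

function outcomeCost(o: Outcome): number {
  const PEDESTRIAN_WEIGHT = 1.0; // Is a pedestrian worth the same as a passenger?
  const PASSENGER_WEIGHT = 1.0;  // Some argue the car should protect its occupants first.
  const CHILD_MULTIPLIER = 1.5;  // Should age matter at all? Who chose 1.5, and on what basis?

  let cost =
    o.pedestriansHit * PEDESTRIAN_WEIGHT + o.passengersHit * PASSENGER_WEIGHT;
  if (o.childrenInvolved) cost *= CHILD_MULTIPLIER;
  return cost;
}

// The control flow is ordinary code review material;
// the ethics live entirely in the weights above.
function leastBadOption(options: Outcome[]): Outcome {
  return options.reduce((best, o) =>
    outcomeCost(o) < outcomeCost(best) ? o : best
  );
}
```

The uncomfortable part is that someone has to write those constants, review them, and ship them, whether or not anyone on the team ever framed it as an ethical choice.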
Autonomous cars make these issues very obvious, so they’re a good example. But the same issues are inherent in almost every area where we use AI to make decisions. We see HR systems that filter candidates, law-enforcement systems that do risk scoring, even companies trying to build what amounts to a generic truth machine. Every system like that carries enormous ethical and social implications. Companies are rushing forward to build them, because that’s what we do as an industry. Developers are the ones building them, and almost nobody takes the time to ask what effects they have on people and social structures.
We also face smaller ethical questions in our day-to-day professional lives, ones we mostly don’t pay attention to. Take designing user input for a system: when you give the user a choice, do you make it opt-in or opt-out? This can hold an ethical consideration - what is the default, when does the user need to take a conscious action, and what are the implications? On YouTube, for example, the auto-play selector is “on” by default; you need to explicitly choose not to watch the next video. From an ethical point of view, this says something about how the product treats its users, how it values people’s time and freedom of choice. We tend to address these questions from a KPI perspective (what the funnel will look like, how it affects conversion), but we don’t stop to consider the ethics we embed into our products with these small design choices, as the sketch below shows.
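Here is a minimal illustration of how little code separates the two stances. The settings object and field names are invented for this sketch, not taken from any real product:

```typescript
// Hypothetical player settings: one default value embeds an ethical stance.

interface PlayerSettings {
  autoPlayNext: boolean;
}

// Opt-out: the product decides for the user, who must act
// to reclaim their time and attention (the YouTube-style default).
const optOutDefaults: PlayerSettings = { autoPlayNext: true };

// Opt-in: the user stays in control; the product has to earn the next view.
const optInDefaults: PlayerSettings = { autoPlayNext: false };
```

Flipping that one boolean changes nothing technically and everything ethically: one default spends the user’s attention automatically, the other asks for it.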
Going against the flow
There are challenges with incorporating ethics into our development mindset. One of the biggest psychological challenges is that an ethical dilemma usually has no concrete answer. That’s pretty hard for us software engineers. We’re used to thinking in algorithms, in writing code to create a solution, something testable and predictable. But when we talk about ethics, we mostly ask questions. We need to talk about values, potential consequences, society, how we see ourselves; we need to consider multiple worldviews, and we may never reach a definitive solution. It’s comfortable to avoid these questions and focus on the things we can actually solve.
There’s also no structural support in the industry for making ethics part of what we do. Other, more mature professions usually have an ethical framework in place that plays a role in professional practice (think of medicine, architecture and construction, mechanical engineering - areas where a code of ethics is a familiar thing). But software engineering is a young profession, evolving and growing at an unprecedented pace. It has become important and inherent in every aspect of our lives without having had the time to mature, to make ethics part of its training and ongoing practice. So stopping to discuss ethics goes, in a way, against the “move fast” flow the industry favors.
To actually acknowledge what we do, we need to move away from the concept of a “technology company”. Almost every company today uses technology, but the label is too often used as a decoy. The likes of Facebook, Uber, and Airbnb try to position themselves first and foremost as technology companies, and thereby avoid some of the regulation and responsibility you would expect of a media company, a transportation service, or a hotel chain. We shouldn’t be blinded by technology, we shouldn’t excuse companies because they develop advanced technologies, and we shouldn’t idealize the founders and leaders of such companies just because they are successful at building technology. Let’s take responsibility for what we actually create.
Don’t do evil
So, what can we do if we care about ethics and want to bring it into our practice? The main thing is probably to keep an open mind and keep asking questions. That’s what it’s mostly about: asking questions. Thinking about what we do, how it affects other people, and whether we’re happy with that effect. We’re lucky to be in a sought-after profession, so we have the leverage to take a stand and be heard. We need to raise these questions when we encounter them and start these conversations. Even if we don’t have answers, we can at least bring the issues up, get people involved, and raise awareness.
Another powerful tool we have is choosing who we work for. There might always be some compromises when it comes to business priorities, but we can at least avoid helping the obviously ‘evil’ ones: companies that exploit their users or operate in questionable fields. Early in my career, I spent a very short period working for a company in the online gambling industry. The relief I felt on leaving made it clear to me that feeling good about where you work, and knowing your efforts aren’t contributing to damaging society, is priceless. At the end of the day, it’s not just about being better developers; it’s about being better people.
Additional information
Unfortunately, there's not a lot of training material on ethics in software engineering. One of the few programs that exist comes from Santa Clara University, with some good resources on ethics in technology and Ethical Decision Making.
Another good source is Coed:Ethics, an initiative by a group of London developers.
And it’s always worth listening to the critical voices around technology: people like Douglas Rushkoff, Jaron Lanier, and Evgeny Morozov, as well as the more optimistic Kevin Kelly. You might not always agree with their points of view, but these voices expand the way we look at technology and its connection to us as humans.
About the Author
Rotem Hermon is lead architect at SAP Customer Data Cloud. He has been designing and building backend systems for a long time now.