This week's podcast features a chat with Theo Schlossnagle. Theo is the CEO of Circonus and co-chairs the ACM Queue. In this podcast, Theo and Wes Reisz chat about the need for ethical software, and how we as technical leaders should be reasoning about the software we create. Theo says, "it's not about the absence of evil, it's about the presence of good." He challenges us to develop rigor around the ethical decisions we make in software, just as we do for areas like security. With the incredible implications of machine learning and AI in our future, this week's podcast touches on topics we should all consider in the systems we create.
Key Takeaways
- The ubiquitous societal impact of computers is surfacing the need for deeper conversations on software ethics.
- Ethics are a set of constructs and constraints to help us reason about right and wrong.
- Algorithmic interpretability of models can be difficult to reason about; however, accountability for algorithms can be enforced in other ways.
- Questions to be considered when writing software should evolve into: What am I building, why am I building it, and who will it hurt?
- Ethics in software will take industry reform, deeper conversations, and developing a culture of questioning the software we’re building.
Subscribe on:
Why should developers be regular readers of the ACM Queue?
- 2:00 The articles are timely, forward looking perspectives on issues that affect practicing computer scientists.
- 2:15 The material looks ahead over the next 18 months, so if you want to know what’s coming, ACM Queue is a relatively small publication but with good content.
What does the process of publishing articles look like?
- 3:00 What we do is have a panel of well-respected industry experts who have good networks and who keep their ear to the ground.
- 3:15 We identify which topics we think will be of interest, and recruit a guest editor who is an expert for an edition of Queue.
- 3:25 One of those experts would come in and brief the board on their area of expertise and what’s interesting.
- 3:40 We’d then identify a set of authors who would write articles on their topics of expertise.
- 3:45 Occasionally we will have people come in with an article they have already written.
- 4:05 There’s a reason why museums and art galleries are so fantastic - it’s all to do with curation.
Why are you doing a talk on ethics for QCon London now?
- 4:45 I’ve been in the industry for the last 25 years, and ethics are prevalent throughout society.
- 5:00 When you think about why ethics are important and when we consider them appropriate, it usually has some sort of implications on some wider societal context.
- 5:10 Professional ethics are different from domain specific ethics, such as medical ethics or computing ethics or law ethics.
- 5:20 Professional ethics are simply how you work as a professional and with other people.
- 5:30 The #MeToo movement - or at least the challenges of professional ethics that were so obviously present at Uber - indicates that we clearly still have professional ethics problems.
- 5:50 They’re easy to talk about, because everyone agrees that professional ethics affect professionals everywhere.
Why is this happening now?
- 6:05 If you rewind 50 years, there was the concept that there would be five or six computers in the world, and that no one apart from a handful of people would need to understand them.
- 6:15 Every single person in the world will likely touch a computer within the next fifteen years - and every developed nation now has access to computers and societal impact from computers.
- 6:30 We’re now at a point where the code that we write and the problems we solve have a serious effect on a broad swath of society - and with that comes responsibility.
- 6:45 Ethics are a set of constructs and constraints about how we act and how we separate right and wrong.
Can you elaborate on the difference between professional ethics and domain specific ethics?
- 7:00 Ethics are a construct for a systematic view of what’s right and wrong and for actions.
- 7:10 Given the domain you’re working in, the actions you may take are very different.
- 7:20 If you’re in medicine, your actions are focused on the health and well-being of the patient.
- 7:30 That doesn’t apply the same way as business ethics, where you’re trying to close a deal.
- 7:45 In any domain you’re doing a complex set of actions, and you have to evaluate whether those actions are for good or not for good.
- 8:00 Professional ethics might seem broad, but it’s about acting professionally.
How do you respond to developers that say they were just writing software?
- 8:45 We are all just writing software for the use cases that we have.
- 8:50 Don’t do evil is a really low bar to set - it’s not the absence of doing evil, it’s the presence of doing good.
- 9:10 If you are building software, and it’s not doing good, then we have a problem.
- 9:15 If you’re writing a piece of software that is used by another piece of software, which is in turn used by another piece of software - which describes basically everything written today - that’s something you can’t control.
- 9:30 Our job is not to police that but to police ourselves.
What about machine learning that uses a black box model?
- 9:55 If you build a model that people take actions on, then it is your ethical responsibility to understand the model you are building - it affects people directly.
Not all developers are going to understand the maths behind the models, though.
- 10:40 There are mathematical approaches to validate those models after the fact.
- 10:50 It’s worth understanding the input model, even if it’s not possible to understand the transformations on top of it - and it’s completely reasonable to expect a test that the model is valid.
- 11:10 It’s really important that we understand what our society is, who it’s made up of, and to make sure those assessments are fair.
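One way to make the "test that the model is valid" idea concrete is a black-box outcome check: without opening up the model, compare its decisions across the groups that make up society. A minimal sketch in Python, where the model, the applicant data, and the four-fifths threshold are all hypothetical stand-ins, not anything from the podcast:

```python
def approval_rate(model, applicants):
    """Fraction of applicants the black-box model approves."""
    decisions = [model(a) for a in applicants]
    return sum(decisions) / len(decisions)

def passes_disparate_impact_check(model, group_a, group_b, threshold=0.8):
    """Four-fifths rule heuristic: the lower group's approval rate should
    be at least `threshold` of the higher group's rate. The model is
    treated as a black box - we test outcomes, not internals."""
    rate_a = approval_rate(model, group_a)
    rate_b = approval_rate(model, group_b)
    if rate_a == 0 or rate_b == 0:
        return rate_a == rate_b
    return min(rate_a, rate_b) / max(rate_a, rate_b) >= threshold

# Hypothetical black-box model: approves applicants above an income cutoff.
model = lambda applicant: applicant["income"] > 40000
group_a = [{"income": 50000}, {"income": 60000}, {"income": 45000}, {"income": 30000}]
group_b = [{"income": 42000}, {"income": 35000}, {"income": 38000}, {"income": 30000}]
ok = passes_disparate_impact_check(model, group_a, group_b)
```

The point of the sketch is that the model's internals are never inspected - only its decisions are tested, which is the kind of accountability-without-interpretability being described.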
There’s a lot of difficulty in trying to understand a model.
- 11:30 I’m a pessimist in that realm - there have been some requests from government for algorithmic transparency.
- 11:40 It turns out that it’s hard to be algorithmically transparent even about a binary search.
- 11:45 It’s very difficult to describe algorithms to a non-computing professional.
- 11:55 Computational thinking is not part of traditional education, so the concepts are slightly foreign.
- 12:00 When you get into large-scale linear algebra - common in machine learning - the models become very hard to understand and explain in English.
- 12:15 I think accountability for algorithms can be applied in different ways.
- 12:20 There was a fantastic presentation on gerrymandering at Monktoberfest; they used machine generation to create thousands of maps for certain states and voting districts.
- 12:40 They built a distribution model around the winner-takes-all model, and asked whether it would match existing districts.
- 12:55 This distribution was used to show that some arrangements fall so far outside the bell curve that they couldn’t realistically have arisen without intent.
- 13:10 You don’t have to understand how the map was constructed, or the intentions, but you can detect whether it is significantly out of expectations.
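The gerrymandering analysis described above can be sketched in miniature: simulate many neutral maps, build the distribution of seat counts, and measure how far an observed plan falls outside it. This is a hypothetical toy (Gaussian district-level noise, made-up parameters), not the Monktoberfest presenters' actual model:

```python
import random
import statistics

def simulate_seats(vote_share, n_districts, rng):
    """One simulated neutral map: each district's vote share is the
    statewide share plus random local variation; count districts won."""
    return sum(1 for _ in range(n_districts)
               if rng.gauss(vote_share, 0.08) > 0.5)

def outlier_zscore(observed_seats, vote_share, n_districts,
                   trials=10_000, seed=42):
    """Build a seat-count distribution from many simulated neutral maps,
    then report how many standard deviations the observed plan sits
    from that distribution's mean."""
    rng = random.Random(seed)
    ensemble = [simulate_seats(vote_share, n_districts, rng)
                for _ in range(trials)]
    return ((observed_seats - statistics.mean(ensemble))
            / statistics.pstdev(ensemble))

# A plan winning 12 of 13 districts on a 50% statewide vote share sits
# far outside the distribution of neutral maps - evidence of intent,
# without needing to understand how the map itself was drawn.
z = outlier_zscore(observed_seats=12, vote_share=0.5, n_districts=13)
```

This captures the accountability argument: you never interrogate the map-drawing process itself, only how improbable its outcome is under a neutral model.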
How are you going to frame the ethics discussion for developers in your QCon talk?
- 13:35 It’s important to understand where ethics come from, and how people respond to them.
- 13:40 We’ll start off talking about where ethics come from, systems and constructs for right and wrong.
- 13:50 There are generally three schools of thought: deontology (rules-based), consequentialism (based on outcomes), and virtue ethics.
- 14:15 They are three different lenses through which we view ethics.
- 14:20 Rules-based ethics has a natural appeal - with a ruleset, you can simply follow it.
- 14:50 I think that’s dangerous because we are such a young industry, and it changes dramatically.
- 15:30 The problem with computing is that it has changed dramatically over a very short space of time.
- 15:45 Instead of having a set of rules that are either outdated or soon will be, we should start with a virtue-based view of ethics.
- 15:55 We need to be asking what virtues we want to achieve, and what we do to try and aim at those goals.
- 16:10 We need to ask what does right look like in computing, and ask ourselves those questions as we are doing our jobs.
So you might have a virtue of not violating someone’s safety online?
- 16:40 If you are writing a social service, you may have issues of privacy and intentional or accidental disclosure.
- 16:45 Is privacy something that we want to guarantee in computing? I would argue yes.
- 17:00 The idea that a piece of software intentionally or unintentionally does this, or encourages accidental disclosure - those are ethical considerations you should talk about.
- 17:20 When Unix boxes started becoming widely used, they had everything running - a telnet server, a finger server - but it took a long time to get to a secure-by-default mindset.
How does this apply to security?
- 18:15 Security has traditionally faced more ethical challenges, so more sophisticated models have been built around it.
Can you give any more examples?
- 18:50 There have been ample opportunities recently.
- 19:00 For example, in California you are required to have a license to operate an autonomous car.
- 19:05 Uber knowingly deployed these autonomous cars whilst ignoring those laws.
- 19:15 Humans are a social species - the society is governed by rules, which have to be obeyed.
- 19:35 The Uber deployment was a highly orchestrated inter-disciplinary ethical violation in order to get that car on the road and run over a kid.
- 19:45 Another is Volkswagen; you have the ethical concerns of environmental damage.
- 20:05 They knowingly made the product run in a different mode when it was being tested, which is a gross ethical violation.
- 20:15 Even more subtle ethical violations could have happened - are you optimising for a good driving experience and environmental benefit, or are you optimising for the test?
- 20:45 You have to consider what you are building, and who are you going to hurt?
- 21:00 There’s a difference between teaching someone the knowledge to be able to pass a test versus teaching them to answer the test correctly.
- 21:15 We have the same problem with machine learning - the entire premise is that it’s fed by models, and we have a lot of latitude in selecting one model over another.
How do you put practices in place to make sure the teams are making ethical decisions?
- 21:50 It’s going to require very comprehensive industry reform.
- 22:00 We’re going to see more publications covering the topic.
- 22:15 We need exposés of ethical violations.
- 22:25 We need to develop a culture of questioning the ethical underpinning of the work that we are doing.
- 22:30 If you’re a junior developer, and you’re building a highly isolated component, that’s probably going to be a very short conversation.
- 22:45 The closer you get to a human being with those implementations the more important it is to ask those questions.
Do we need a Hippocratic oath for computing?
- 23:00 The answer to that (for today) is no - but it is coming.
- 23:10 The reason we have the Hippocratic oath in medicine is that medicine has a ubiquitous impact on society.
- 23:25 Well-being requires health care and medicine.
- 23:35 In order to supply that you have to devise a way of delivering that to the whole of society, because it affects the whole of society.
- 23:40 Computing affected a thousand or so people 60 years ago - now it affects billions of people.
- 23:50 It is quickly becoming the underpinning of all human interactions in society.
- 24:00 With that ubiquitous presence, it is imperative that the people who build in that industry have some kind of Hippocratic oath.