Key Takeaways
- Society should demand transparency as well as legal and financial accountability for the use of algorithms in automated decision making. Otherwise, neither the public nor regulatory agencies will be able to understand or regulate complex algorithms and the intricate networks of data those algorithms draw on.
- There is no consensus on how to define, avoid, or even make explicit the bias in algorithms used to execute public policy or to conduct scientific research.
- The seamless and convenient nature of many technologies, such as personalized smart homes, makes it difficult to understand where data comes from, how it is used by algorithms, and where it goes.
- Companies and individuals, especially those working in the public sector, should assume that the results of algorithmic decisions will have to be explained, in a timely fashion, to people adversely affected by them so that they can appeal or challenge those decisions.
- It also seems reasonable to assume that how an individual's data is being used will have to be explained.
The use of automated decision making is increasing.
The algorithms underlying these systems can produce results that are incomprehensible or socially undesirable. How can regulators determine the safety or effectiveness of algorithms embedded in devices or machines if they cannot understand them? How can scientists understand a relationship based on an algorithmic discovery?
Examples of such areas are: determining who is let out on bail or gets financial credit, predicting where crime will take place, ascertaining violations of anti-discrimination laws, or adjudicating fault in an accident with a self-driving car.
It is unclear whether an algorithm can detect its own flaws any more than a human being can determine whether he or she is mentally ill. No line of code in these algorithms says “do a bad thing to someone.”
What can we do to solve this problem?
Panelists:
- Rebecca Williams - professor of public law and criminal law, in association with Pembroke College, Oxford University
- Andrew Burt - chief privacy officer & legal engineer at Immuta
- Michael Veale - University College London, Department of Science, Technology, Engineering and Public Policy
InfoQ: People are often unaware of the role of algorithms in society. What is the best way to educate people about the benefits and problems associated with the growing pervasive use of algorithms?
Andrew Burt: What we need most is history and context - about how this type of technology has been used before, and about what’s different now, especially when it comes to what’s commonly referred to as “AI.” We have, on the one hand, people like Elon Musk declaring that AI is an existential threat to life on earth, which is having a real impact on the way the public thinks about AI. And we have, on the other hand, some diehard proponents of AI suggesting it will solve every problem we have. The truth is, of course, on neither extreme. What’s more, not every challenge AI poses is new. We’ve already developed tools and practices to confront some of these challenges in other areas. So I think everyone would benefit from a broader discussion that places the challenges of AI in perspective, and lets us build off of past successes and correct mistakes in how we adopted earlier technologies. There’s a lot of good we can do if we get this right. Conversely, there’s a lot of harm that could come about if we get this wrong - discriminatory harms, missed opportunities, and more. The stakes are high.
Rebecca Williams: Articles 13(2)(f), 14(2)(g) and 15(1)(h) of the GDPR state that data subjects have “the right to know the existence of automated decision-making including profiling”. So whatever else they are entitled to by way of information about the process, at the very least people will have to be told when a particular decision about them or concerning them is being made using an automated process. The hope is that that will raise some awareness of when and where these systems are being used.
In terms of education, obviously the earlier we start with these issues the better. Schools increasingly teach coding to students as well as ethical issues such as citizenship or personal and social education, so the more that can be done to raise awareness and discussion in those contexts, the better prepared future generations will be when they come to design, operate and interact with these systems. This is definitely something that Universities can also help to facilitate. There are already contexts in which academics visit schools to support learning and it would be great if this could happen on this subject too.
That of course leaves the question of how we can reach those who went through their school education before these kinds of concerns had arisen. The same challenges arise here as arise in relation to the dissemination of any kind of information: people will tend to rely on certain sources rather than others, giving rise to the risk of echo-chambers and misinformation. There will certainly be a role for the mainstream media here and balanced, scientifically-based reporting by those media will be vital, as always, but the less reliance the public place on such sources of information, the less effective this will be. There will certainly be a role for institutions like the Information Commissioner’s Office to provide advice and information for citizens through its website, and again as an academic I would be keen to see Universities assisting in this context too, either by supporting these other outlets or through direct public engagement.
Michael Veale: In technology design, there has been a big trend towards making systems “seamless”. In short, this means that people can focus on what they want to do, not how they want to do it, which is usually really great for individuals to help them achieve what they want. Smart homes are an example of this, although many are a bit too clunky to have totally earned that title. Yet with a range of algorithmic systems today, too much seamlessness means that individuals don’t get a chance to question whether this system works the way they want it to. Your smart home might be personalised, but you can’t see where, and to whom, it is sending the data. Your Facebook news feed might seem compelling, but you can’t see who is being excluded, and why.
We could run courses about algorithms in society, but that’s unlikely to solve deeper problems. Technologies move fast. My young cousin told me the other day that at school, they’d been learning about cybersecurity. “They told us not to click on pop-ups” she said. “But how will I know what a pop-up looks like?”. Browsers have moved so quickly to block them, and on mobile devices it’s simply not the paradigm at all anymore. So that one-off education, unless it is building general critical skills, usually is a bit too much of a moving target.
Consequently, we need to embed education in the products and services we use every day. These services should explain themselves, not necessarily with a passage of text or a manual, but through clever design that makes it clear when data flows, automated decisions, and other behaviours are happening. Where that’s the case, individuals should be able to drill down further to see and learn more if interested: and then they’ll no doubt get more of a feel for what is happening around them even when the options to perceive and drill down are not there.
InfoQ: Algorithms will often be used in executing public policy or in scientific research that will affect public policy. Legal requirements, value judgements, and bias are almost unavoidable. How can social values be made explicitly visible, and bias be avoided in algorithmic programming and in interpreting the results?
Burt: On the technology side, there are all sorts of important tools that are being developed to help minimize many of these downsides. A tool called LIME, which helps explain so-called black box algorithms, is one great example. A data scientist named Patrick Hall really deserves a shout out for doing some great work on interpretability in machine learning. And there are many more examples to cite. Our legal engineering and data science teams are staying on top of all these developments at Immuta.
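For readers unfamiliar with LIME, the sketch below shows roughly how it is used to explain a single prediction of an otherwise opaque model. It assumes the open-source lime and scikit-learn Python packages; the synthetic data, feature names and credit-scoring framing are purely illustrative and are not drawn from Immuta’s tooling.

```python
# Minimal sketch: explaining one prediction of a "black box" classifier with
# LIME (local interpretable model-agnostic explanations).
# Assumes the open-source `lime` and scikit-learn packages; the feature names
# and synthetic data below are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "age", "prior_defaults"]
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] - X_train[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["deny", "approve"],
    discretize_continuous=True,
)

# Explain one applicant: LIME fits a simple local surrogate model around this
# point and reports which features pushed the prediction up or down.
explanation = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Because LIME fits an interpretable surrogate model around a single prediction, the reported weights describe local behaviour rather than the model as a whole.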
But I think what’s often overlooked is the procedural side. The processes used to develop and deploy ML are incredibly important, and model risk management frameworks like the Federal Reserve Board’s SR 11-7 have long recognized this fact. That regulation applies to the use of algorithms within financial institutions in the US. The folks at the AI Now Institute have also come forward with what they’re calling algorithmic impact assessments, which offer another framework for this type of approach.
There’s a lot out there, frankly, and we’ll be releasing a white paper shortly summing up some of these best practices - both technical and procedural - to help our customers and others manage the risks of deploying machine learning models in practice. We’re hard at work finalizing that whitepaper, and are excited to release it in the next few months.
Williams: There are various different ways we can approach this issue. First, it is vital to examine carefully the data used to train and operate automated decision-making systems. If the data itself is biased, the outcome will be too. There has been a lot of discussion of the risk prediction systems used in the criminal justice context in a number of US states and the difficulty with such systems is that they tend to over-predict recidivism by black defendants while under-predicting it for white defendants. But just to take an example, one potential predictor of risk used might be prior arrests for more minor possession offences. And yet such offences are most likely to be detected by stop and search, and stop and search tactics tend to skew in the same direction: over predicting a reason to stop and search black people while under predicting the need to stop and search white people. So because stop and search is skewed against black people in favour of white, more black people are found to be in possession than white and thus black people are calculated to have a higher risk of recidivism than white people. The initial discrimination in data collection thus feeds through the whole system into the output. So if we think our initial data is likely to produce this kind of skewed effect we should think carefully about whether or not it is appropriate to use it, and we may need to think about imposing duties to gather counterbalancing data.
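To make the mechanism Williams describes concrete, the following toy simulation (with entirely invented numbers, not drawn from any real dataset) shows how a difference in search rates alone produces very different recorded arrest rates, and hence apparent “risk”, for two groups with identical underlying behaviour.

```python
# Toy simulation of how skewed data collection feeds through to skewed risk
# scores. All numbers are invented for illustration only.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                       # people per group
true_possession_rate = 0.10       # identical underlying behaviour in both groups
search_rate = {"group_a": 0.30, "group_b": 0.05}  # group_a is searched 6x as often

for group, p_search in search_rate.items():
    possesses = rng.random(n) < true_possession_rate
    searched = rng.random(n) < p_search
    recorded_arrest = possesses & searched      # only detected if searched
    # A naive "risk" predictor trained on recorded arrests sees very different rates:
    print(f"{group}: recorded arrest rate = {recorded_arrest.mean():.3f} "
          f"(true possession rate = {possesses.mean():.3f})")
```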
Second, there are important policy choices to be made in the process of coding the system. Krishna Gummadi’s work has shown that it is not always possible to have one’s cake and eat it. Usually it will be necessary to choose between different measures of accuracy. So for example a system which has the most accurate method of prediction on aggregate, taken across all cases, might also have the biggest problem with producing skewed results in relation to particular categories of case, as above. Or, conversely, a system which has maximum accuracy in relation to any particular category (such as ethnic status or gender) might not have such a high degree of accuracy across all categories on aggregate. It is vital that any such policy choices between different systems are understood as being just that; they are policy choices which must be made openly and transparently and by an entity which can then be held accountable for making them, not unconsciously by anonymous coders.
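The kind of trade-off described here can be seen in a few lines of code. The sketch below uses synthetic data, not Gummadi’s datasets, to show a classifier whose aggregate accuracy looks acceptable while its false positive rate differs sharply between two groups.

```python
# Sketch: the same classifier can look fine on aggregate accuracy while its
# error rates differ sharply between groups. Synthetic data for illustration;
# this is not a reproduction of Gummadi et al.'s analysis.
import numpy as np

rng = np.random.default_rng(1)

def group_report(y_true, y_pred, groups):
    print(f"aggregate accuracy: {(y_true == y_pred).mean():.2f}")
    for g in np.unique(groups):
        mask = groups == g
        fp = ((y_pred == 1) & (y_true == 0) & mask).sum()
        fpr = fp / max(((y_true == 0) & mask).sum(), 1)
        print(f"  group {g}: accuracy={(y_true[mask] == y_pred[mask]).mean():.2f}, "
              f"false positive rate={fpr:.2f}")

# Two groups with different base rates of the outcome; a single threshold on a
# shared risk score yields unequal false positive rates across the groups.
n = 10_000
groups = rng.integers(0, 2, size=n)
base_rate = np.where(groups == 0, 0.2, 0.4)
y_true = (rng.random(n) < base_rate).astype(int)
score = 0.6 * y_true + 0.2 * groups + rng.normal(scale=0.3, size=n)
y_pred = (score > 0.5).astype(int)

group_report(y_true, y_pred, groups)
```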
Third, even if we are confident that we have done all we can ex ante to gather balanced data and make responsible coding choices, it will also be necessary ex post to ensure that such systems are subject to regular audit to ensure that they are not spontaneously generating forms of discrimination that we had not predicted. It will be necessary to do this even if we are not sure why it is happening, but, fourth, it is also vital that we do everything we can to make the algorithms themselves transparent and accountable, so that if an audit of this kind does pick up a problem we can see where and how it has arisen. There are a number of people working on this and a group of us in Aberdeen (Prof Pete Edwards), Oxford and Cambridge (Dr Jat Singh) have just received an EPSRC grant to work further on this issue.
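As one illustration of what a recurring ex post audit might check, the sketch below compares the rate of favourable automated decisions across groups and flags large gaps for human review. The four-fifths threshold, column names and decision log are illustrative assumptions, not part of the EPSRC project mentioned above.

```python
# Sketch of a periodic ex post audit: compare the rate of favourable automated
# decisions across groups and flag large gaps for human review. The 0.8
# ("four-fifths") threshold and column names are illustrative assumptions.
import pandas as pd

def disparate_impact_audit(decisions: pd.DataFrame, group_col: str,
                           outcome_col: str, threshold: float = 0.8) -> bool:
    """Return True if the audit passes, False if a gap needs investigation."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    ratio = rates.min() / rates.max()
    print(rates.to_string(), f"\nmin/max ratio = {ratio:.2f}")
    return ratio >= threshold

# Example with made-up decision logs:
log = pd.DataFrame({
    "group": ["a"] * 500 + ["b"] * 500,
    "approved": [1] * 300 + [0] * 200 + [1] * 200 + [0] * 300,
})
if not disparate_impact_audit(log, "group", "approved"):
    print("Flag for review: favourable-decision rates diverge across groups.")
```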
In terms of the sources of regulation for each of these four issues, the systems will be used by both public and private entities. Where they are operated by public or governmental entities I think there is definitely a role for our existing public law to play in holding such entities to account and imposing further duties of transparency, fairness etc, which are already inherent in public law. For private entities the challenge will be to think which of these duties of transparency, accountability and fairness should be carried across into the private sector as the price for the increased power offered by such systems.
Veale: Most useful evidence is causal in nature. We want to know what causes what, and how the world works. Machine learning algorithms aren’t so good at that, and their results and predictive power can be quite brittle as a result. The main way to make social values explicitly visible is to slow down and recognise that our aims are often not just prediction, but understanding. We are in huge danger of training a generation of people who can do the former but not the latter. When you build causal models, you have got a greater opportunity to discuss if this is how you want the world to work and behave. Perhaps it is, perhaps it isn’t: but it’s a conversation that’s more visible, and much easier to have and to communicate.
InfoQ: In May of this year, the European Union's General Data Protection Regulation (GDPR) comes into effect. Among its provisions is Article 22, which deals with automated individual decision-making. Many people argue this rule requires not only that privacy rights over data be respected, but also that decisions made by algorithms be explainable.
Do you agree with this interpretation of the regulation? Does this regulation require data to be removed from use by algorithms? If so, could this reduce the effectiveness of the algorithms? In general, is the European Union's approach a valid one, or will the "law of unintended consequences" make things worse?
Burt: There’s a huge debate going on in the legal community over how, exactly, the GDPR will impact the deployment of machine learning. And given that the GDPR only came into effect within the last month, there’s a lot that’s still up in the air. But my take is that Article 22 needs to be read alongside Articles 13-15, which state that data subjects have a right to “meaningful information about the logic involved” in cases of automated decision-making. In practice, I think this is going to mean that data subjects are going to have the right to be educated about when, why, and most importantly, how something like a machine learning model is using their data. As with any legal analysis, there’s a ton of nuance here. So I’d encourage readers to check out an earlier article I put together on the subject for the International Association of Privacy Professionals. It’s also worth mentioning that a group called the Article 29 Working Party, which has a huge influence on how EU privacy laws are enforced, has come out with its own guidance on this subject, flatly stating that automated decision-making is prohibited by default under the GDPR, with certain exemptions.
Williams: You’ll already know that there has been an intense debate between Goodman and Flaxman, who argue that the GDPR gives a full ‘right to explanation’, and Wachter, Mittelstadt and Floridi, who, in my view more plausibly, argue that it will be sufficient for the data subject to be told of the existence of a machine learning component and what measures of accuracy are being used to check it. I agree with them that the data subject should be told more than just which data points are being used, but also how they are weighed in the circumstances. As I mentioned above, where the system is being operated by a public entity I think there is significant potential for an analogy to be drawn with our current approach to Closed Material Procedure decisions, where if the impact on the individual is significant (s)he has the right to know at least the ‘gist’ of the case against him/her so that (s)he can make ‘meaningful’ use of the right to reply. That might just involve ex ante explanation, as Wachter, Mittelstadt and Floridi suggest, but it might include ex post explanation too. In relation to private entities the situation is more difficult as they are generally subject to fewer duties, although our existing law on discrimination will do some work and there is also the potential for public-style duties to be attached to the use of such systems even in a private context.
Art 17 allows for the right to erasure of personal data, but not where the processing is necessary to comply with a legal obligation. The key distinction here is between individual and general data. For removal of individual data there are some limited rights, like those in Art 17, but for any duty or obligation to remove general data (i.e. data affecting a whole category of people, such as the stop and search data above), you might have to look either to more general provisions in the regulation like ‘suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests’, or general duties in, e.g. public law (where the processor is public/governmental) or law prohibiting discrimination.
Again, that depends on whether what is being used is individual or general data. Removing skewed general data might make the algorithm more accurate, whereas removing accurate individual data in relation to particular kinds of applicants might make it more inaccurate and give rise to the skewing effect.
I don’t think anyone knows the answer to that for certain at this point! I do think it will be necessary to remember the ex post audits I discussed above, so that if in practice we do see unintended consequences there is an opportunity to catch those and remedy them.
Veale: Article 22 of the GDPR is a really old provision. It dates back to French law from 1978, and much of it is unchanged from Article 15 of the 1995 Data Protection Directive (implemented in the UK by the Data Protection Act 1998). Yet it hasn’t been used much, and some scholars have called it a “second-class right” as a result.
The fundamental purpose of Article 22 is to ensure that if an organisation wants to take a fully automated, potentially significant decision about someone, it needs a legal basis to do so (freely given consent, necessity for performing a contract, or a legal obligation). If the organisation doesn’t have one of these, it can’t take the decision. If it does secure one, it has to put safeguards in place to ensure the decision is taken fairly, including allowing an individual to challenge the decision. It’s unclear in many cases how that challenge will work: many significant decisions are taken very quickly. If a video of a topical, political event is automatically removed from YouTube, how quickly can it be brought back up? If its time of relevance has passed, a human review is of little use.
Another of these safeguards, beyond human challenge, is described in Recital 71 of the GDPR. Recitals, which preface a European law, are meant to illustrate its spirit and context, but in hotly contested laws like the GDPR they have become, frustratingly for lawyers, a place to put things that really should be in the main, binding articles. The explanation safeguard, unlike others such as the right to human intervention, was placed there, and so we will see if and when the European Court of Justice decides it is binding on data controllers.
Yet let’s not forget the actual meaning of Article 22, which isn’t just about explanations. It definitely restricts some uses of algorithmic systems people think are unfair. Automated hiring and CV filtering, for example, are techniques which are highly suspect under Article 22. When you are deciding to interview someone automatically, using one of the analytic products on the market today, you are likely making a solely automated, significant decision. What is your legal basis? You don’t have a contract, and probably don’t have a legal obligation, so that leaves consent. Consent in any employment context is highly problematic due to the power imbalances, and can rarely be seen as freely given. Personally, I think that Article 22 renders a lot of large scale, automatic hiring practices very legally suspect.
InfoQ: What do you think is the critical issue facing societies with widespread use of algorithms instead of humans to make critical decisions?
Burt: In two words: silent failures. As we begin to rely more and more on complex algorithms, especially various forms of neural networks, explaining their inner workings is going to get progressively harder. This isn’t simply because these models are hard to interpret, but because the networks we’re connecting them to are becoming more and more complex. Every day, the world of IT gets harder to manage - we have more endpoints, more data, more databases, and more storage technologies than ever before. And so I believe our biggest challenge lies in being able to understand the data environments we are relying on. Because if we don’t, there’s a very real possibility that we’ll be constantly confronting silent failures, where something has gone wrong that we simply don’t know about, with very real - and potentially devastating - consequences.
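One common source of silent failure, though by no means the only kind Burt describes, is input data that quietly drifts away from what a model was trained on. The sketch below shows a basic check for this using a two-sample Kolmogorov-Smirnov test; the data, threshold and function name are illustrative assumptions, not Immuta’s approach.

```python
# Sketch: one simple guard against "silent failures" - alert when the live
# distribution of an input feature drifts away from the training distribution.
# Uses a two-sample Kolmogorov-Smirnov test; thresholds are illustrative.
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(train_values: np.ndarray, live_values: np.ndarray,
                        p_threshold: float = 0.01) -> bool:
    """Return True if drift is detected (distributions differ significantly)."""
    statistic, p_value = ks_2samp(train_values, live_values)
    print(f"KS statistic={statistic:.3f}, p-value={p_value:.4f}")
    return p_value < p_threshold

# Example: the live data has quietly shifted upwards relative to training data.
rng = np.random.default_rng(7)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.4, scale=1.0, size=5_000)
if check_feature_drift(train, live):
    print("Investigate: model inputs no longer look like the training data.")
```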
Williams: I think most people would encapsulate this in the word ‘fairness’. But that really boils down to transparency and accountability: (1) We need to know as much as possible about what these systems are doing, how and why. (2) There needs to be an appropriate entity to hold accountable for them, and an appropriate and accessible system for holding that entity accountable.
Our legal and regulatory structures need to provide and incentivise these two things, working closely with the computer scientists who generate the systems.
Veale: The biggest issue here is that algorithms require maintenance and oversight, which can be hard to provide at a small scale. They theoretically allow a huge volume and speed of automated decisions, many more than a human can make. Small organisations can really benefit from that. Previously, if organisations wanted a lot of decision-making to happen, they needed a lot of people. Those people could provide oversight and feedback, even if they brought their own biases. Now, a few individuals can deploy and manage huge decision-making infrastructures, but they don’t bring the human capacity to look over them and maintain them. This creates a huge imbalance, particularly for low-capacity organisations that might be tempted to rely on automation and machine learning. In these cases, external oversight is needed; but who provides it? Who pays for it? And how does it really get to grips with some of the hidden challenges that algorithmic decision-making might cause, challenges which are often buried deep within organisations and their work politics?
Conclusion
Failing to take public fears into consideration, or to foresee adverse consequences, has impeded technologies such as nuclear energy and genetically modified crops.
New York City is establishing a task force to recommend how people affected by city agencies' use of algorithms can obtain explanations and how any harms can be mitigated. The European Union's General Data Protection Regulation is another attempt to begin dealing with the issue.
Carl Jung is reputed to have said that within every human being hides a lunatic. If algorithms model human behavior, what does that mean for society?
About the Panelists
Andrew Burt is chief privacy officer and legal engineer at Immuta, the world’s leading data management platform for data science. He is also a visiting fellow at Yale Law School’s Information Society Project. Previously, Burt was a special advisor for policy to the head of the FBI Cyber Division, where he served as lead author on the FBI’s after-action report on the 2014 attack on Sony. Burt has published articles on technology, history and law in the New York Times, the Financial Times, the Los Angeles Times, Slate, and the Yale Journal of International Affairs, among others. His book, American Hysteria: The Untold Story of Mass Political Extremism in the United States, was called “a must-read book dealing with a topic few want to tackle” by Nobel laureate Archbishop Emeritus Desmond Tutu. Burt holds a JD from Yale Law School and a BA from McGill University. He is a term-member of the Council on Foreign Relations, a member of the Washington, DC, and Virginia State Bars, and a Global Information Assurance Certified (GIAC) cyber incident response handler.
Rebecca Williams is professor of public law and criminal law at the University of Oxford. Her work includes examining optimum methods of decision-making and the use of criminal law as a form of regulation. Increasingly her work also focuses on the relationship of law and technology and the ways in which the law will need to develop in order to keep pace with technological developments.
Michael Veale is a doctoral researcher in responsible public-sector machine learning at University College London, specialising in the fairness and accountability of data-driven tools in the public sector, as well as the interplay between advanced technologies and data protection law. His research has been cited by international bodies, regulators and the media, as well as debated in Parliament. He has acted as a consultant on machine learning and society for the World Bank, the Royal Society and the British Academy, and previously worked on IoT, health and ageing at the European Commission. Veale tweets at @mikarv.