Key Takeaways
- There are a number of ethical dilemmas woven inextricably into the field of Artificial Intelligence, many of which are often overlooked, even within the engineering community
- Even the best intentions are often not enough to guarantee solutions free from unintended or undesired results, as humans can accidentally encode biases into AI engines and malicious actors can exploit flaws in models
- In the short term, accountability and transparency on the part of tech companies are critical, as is vigilance on the part of consumers
- AI is sure to be the most potent catalyst for change that humanity has ever seen, even by the most conservative of projections. Still, it is of the utmost importance that humans do not prematurely cede responsibilities to machines in excess of their capabilities.
The term "Artificial Intelligence" conjures, in many, an image of an anthropomorphized Terminator-esque killer robot apocalypse. Hollywood movies, in recent decades, have served to only further this notion. Physicists and moral philosophers like Max Tegmark and Sam Harris, however, claim we need not fear a runaway superintelligence to adequately worry about the deleterious effects endemic to the AI space, but rather that competence on behalf of machines is a sufficiently frightening springboard from which an irreversibly harmful future can be launched.
That said, there are currently a number of far more nefarious, insidious, and relevant ethical dilemmas that warrant our attention. In a world increasingly controlled by automated processes, we are rapidly approaching a time in which adaptive, self-improving algorithms guide, or even dictate, most of the decisions that define human experience.
Algorithmic Bias
The phrase "Garbage In, Garbage Out," commonly credited to programmer Wilf Hey, refers to the inherent flaw of analytical frameworks whereby low-quality inputs invariably produce unreliable outputs.
To that end, all data scientists ought to ask themselves, "Is my training set properly representative of the larger population to which the output of these algorithms will be applied?"
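As a rough illustration of what such a check might look like in practice, the sketch below compares each demographic group's share of a training set against its share of the target population. The column name and population figures are purely hypothetical placeholders:

```python
import pandas as pd

# Hypothetical reference distribution for the population the model will serve.
POPULATION_SHARES = {"group_a": 0.60, "group_b": 0.30, "group_c": 0.10}

def representation_gap(train_df: pd.DataFrame, column: str) -> pd.DataFrame:
    """Compare each group's share of the training set to its population share."""
    train_shares = train_df[column].value_counts(normalize=True)
    rows = []
    for group, population_share in POPULATION_SHARES.items():
        training_share = float(train_shares.get(group, 0.0))
        rows.append({
            "group": group,
            "population_share": population_share,
            "training_share": training_share,
            "gap": training_share - population_share,
        })
    return pd.DataFrame(rows)

# Usage (hypothetical): representation_gap(training_data, "demographic_group")
```

Large gaps do not prove a model will be biased, but they are an early, inexpensive warning sign that the training set may not represent the population it will ultimately serve.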
There is ample evidence showcasing the downsides of foregoing such an analysis. MIT’s Joy Buolamwini recently published research depicting the presence of both racial and gender bias in AI systems purveyed by some of the largest tech companies in the world, likely attributable to inadvertently biased training sets.
The accuracy gap between classifying lighter male faces versus darker female faces is a harbinger of what is to come in the absence of more conscientious model training. One can imagine the horrifying effects of unknowingly deploying such a system within, perhaps, the criminal justice system.
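One inexpensive way to surface such a gap before deployment is to report an evaluation metric separately for each subgroup rather than as a single headline number. A minimal sketch, assuming labels, predictions, and group memberships are already available as parallel sequences:

```python
import pandas as pd
from sklearn.metrics import accuracy_score

def accuracy_by_group(y_true, y_pred, groups) -> pd.Series:
    """Report classification accuracy separately for each subgroup."""
    df = pd.DataFrame({"y_true": y_true, "y_pred": y_pred, "group": groups})
    return df.groupby("group").apply(
        lambda g: accuracy_score(g["y_true"], g["y_pred"])
    )

# A wide spread between rows (e.g. 'lighter_male' vs. 'darker_female')
# is precisely the kind of disparity Buolamwini's audits measured.
```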
However, this should not be confused with adequately trained algorithms that do not necessarily treat all socioeconomic, racial, and religious groups in exactly the same way. The legal doctrine of "disparate impact" holds that a policy, no matter how innocently constructed, may be prohibited if it disproportionately and adversely affects members of a protected class. What is to be done if a mortgage application bot is found to be rejecting the applications of such members, even when those rejections are not algorithmically tied to the attribute that defines the protected class? If a health insurance company were to identify a specific enzyme as the single most useful predictor of patients likely to need expensive surgery, and that enzyme is predominantly found in people of a particular race, would it be fair for the average person of that race to face higher health insurance premiums, even though those premiums were justified by the presence of the enzyme and not the color of their skin?
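For context, one common statistical screen in this area is the "four-fifths rule" used by US regulators: if a protected group's selection rate (e.g. mortgage approval rate) falls below roughly 80% of the most favored group's rate, the practice draws scrutiny regardless of intent. A minimal sketch of that calculation, with hypothetical column names:

```python
import pandas as pd

def disparate_impact_ratio(decisions: pd.DataFrame,
                           group_col: str,
                           approved_col: str) -> float:
    """Ratio of the lowest group approval rate to the highest."""
    approval_rates = decisions.groupby(group_col)[approved_col].mean()
    return float(approval_rates.min() / approval_rates.max())

# Usage (hypothetical): a ratio below ~0.8 on a log of mortgage decisions
# would flag potential disparate impact, irrespective of which features
# actually drove the model's rejections.
```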
There is also the important consideration as to whether developers are unknowingly encoding values into machines that may not be shared by the broader population. For example, consider the increasingly popular modern adaptation of Philippa Foot’s 1967 Trolley Problem.
Suppose an autonomous vehicle can detect an impending accident and the casualties that accident is going to produce. Additionally, suppose the vehicle can assess whether casualties will occur in the case that it deploys an accident-avoidance maneuver. How ought machines decide, for example, whether to save a passenger versus a pedestrian?
MIT’s Moral Machine is an interactive online platform that allows people to test their own ethical predispositions across permutations of this dilemma, iterated over several different preference parameters (e.g. high status vs. low status, male vs. female, old vs. young) that help tease apart underlying biases. After collecting over 40 million decisions from respondents in 233 countries and territories, the MIT Media Lab released interesting results pertaining to potential exhibited biases. It seems, at least from a statistical perspective, that certain preferences (for example, a preference for inaction over action) may actually be far more culturally anchored than many of our intuitions would have us believe.
It is important to consider whether, and in what situations, regulation or adherence to a centralized doctrine of best practices might be necessary to ensure that humans do not encode destructive biases into AI systems, while of course weighing the potential to stifle innovation.
Algorithmic Integrity
Ensuring that bias in algorithms and data sets is accounted for is only the first step. Engineers must be certain that AI systems are properly performing the tasks they claim to address, hopefully in the ways in which developers intended. In an ideal world, one could draw scrutable lines from characteristics of underlying data sets to the conclusions drawn from them.
Nonetheless, increasingly complex algorithms often give rise to unexpected or even undesired results that cannot be traced back to their source. There is an argument to be made for requiring some form of licensure to deploy certain "black box"-style architectures, and perhaps for outlawing them outright in particular applications (e.g. military, pharmaceutical, criminal justice). The movement toward explainable AI (XAI) is rooted in precisely these concerns. DARPA has made strides toward establishing such a standard, paving the way for what it calls "third-wave AI systems."
[Figure: DARPA's vision for explainable, third-wave AI systems. Source: DARPA]
A number of libraries and toolkits were released in 2019 to further the cause of XAI. For example, Aequitas is an open-source bias audit toolkit purveyed by the University of Chicago’s Center for Data Science and Public Policy. Users of TensorFlow 2.0 can utilize tf-explain to enhance the interpretability of neural networks. Microsoft’s InterpretML package serves to, "[f]it interpretable machine learning models (and) [e]xplain blackbox machine learning."
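To give a flavor of what these toolkits offer, the sketch below follows the general pattern of InterpretML's published quickstart: fit an inherently interpretable ("glassbox") model and inspect which features drive its predictions. The dataset is a stand-in; any tabular binary classification problem would do:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Stand-in tabular dataset used purely for illustration.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Explainable Boosting Machine: a glassbox model whose per-feature
# contribution curves can be inspected directly after training.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Opens an interactive global explanation of the fitted model.
show(ebm.explain_global())
```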
Assuredly, the ubiquitous use of black box predictive engines will only serve to amplify the current Reproducibility Crisis, wherein scientists and academics repeatedly fail to recreate and verify the findings of an astounding portion of the seemingly reputable research published over the past century.
Yet, it is not as if transparency represents a technological panacea. Suppose every single mortgage applicant of a given race is denied their loan, but the Machine Learning engine driving that decision is structured in such a way that the relevant engineers know exactly which features are driving such classifications. Further suppose that none of these are race-related. What is the company to do at this point? As mentioned earlier, the doctrine of Disparate Impact prohibits practices that disproportionately and adversely affect a protected class, regardless of the logic behind those practices.
Recently, the credit card purveyed by a joint venture between Apple and Goldman Sachs was reported to be consistently offering lower credit limits to wives than husbands, even when those couples share bank accounts and file joint tax returns. Hypothetically speaking, if it were shown to a statistically significant level that gender is the single greatest predictor of credit card default, when all else is held equal, would such a disparity be justified or even considered prudent?
On the one hand, it seems strange to endorse this type of gender-based discrimination. On the other hand, it seems equally, if not more, illogical to prevent companies whose entire business model relies on the accurate, data-driven assessment of creditworthiness and counterparty risk from utilizing the data available to them. This would certainly be a case in which leveraging more explainable AI architectures would prove beneficial to a variety of stakeholders, though it would undoubtedly fail to solve the entire problem.
Algorithmic Benevolence
Even if unbiased algorithms are functioning as their developers intended and doing so in scrutable fashion, engineers still must be sure that the goals pursued via Artificial Intelligence are in accordance with the sustainable well-being and longevity of humanity.
Tim Libert, a University of Pennsylvania researcher, published findings that over 90% of health-related websites pass information pertaining to searches and browsing behavior to third parties. An article covering the finding stated, "[t]hat means when you search for ‘cold sores’, for instance, and click the highly ranked ‘Cold Sores Topic Overview WebMD’ link, the website is passing your request for information about the disease along to one or more (and often many, many more) other corporations."
It was also previously reported that Google’s reCAPTCHA system authenticates users not by testing their image recognition capabilities, but by analyzing mouse movements in the micro-moments immediately preceding a click. Progress in AI systems’ ability to detect Parkinson’s disease through analysis of such cursor movements has given rise to new concerns in this arena, as it is now conceivable that Google could make such a diagnosis, pair it with its profile of a given individual, and sell that information to the individual’s health insurer.
Media Synthesis
Artificial Intelligence also presents a major risk in that it is a catalyst for a potentially permanent decoupling of appearance from reality. Consider, for example, the multi-faceted threat posed by Deepfake technology, the technique by which images and videos of human beings, often with matching audio, are synthesized from increasingly sparse sets of existing visual data. Access to the architectures needed to produce Deepfakes has quickly grown ubiquitous, though awareness of the technology’s dangers is severely lagging. Comedian Jordan Peele released a widely circulated public service announcement, a Deepfake of President Barack Obama, to help bring attention to the matter.
Equally, if not more, concerning is a not-so-distant future in which a person caught on camera acting in an undesirable manner can simply declare that the evidence is a Deepfake produced by their enemies. Proving that a piece of content was created via Deepfake is often difficult; proving irrefutably that it was not will be a far more difficult problem. While a number of organizations, such as Google AI, have funded research and efforts to implement Deepfake detectors, the next few years will likely be a game of proverbial cat and mouse between those creating Deepfakes and those seeking to identify them.
Until such research yields a palatable solution or, alternatively, a cheap and easy form of cryptographically signing digital files, it will be incumbent upon ordinary people to remain especially vigilant before assuming the authenticity of any piece of content.
Adversarial Machine Learning
The issues grow more disconcerting when sophisticated malicious actors use cutting-edge technology to generate adversarial input. Using carefully coordinated, mathematically generated perturbations to the image of a panda, engineers were able to fool the world’s top image classification systems into labeling the animal as a gibbon with 99% certainty, despite the fact that the alterations are utterly indiscernible to the human eye.
The same technique was later used to fool the neural networks that guide autonomous vehicles into misclassifying a stop sign as a merge sign, also with high certainty.
Since human vision is generally limited to about 8 bits per color channel while many computer vision systems operate on 32-bit representations, the engineers confined their manipulations to the low-order bits of the image encoding, making the changes physically impossible for a human being to perceive. An attacker does not even need direct access to the architecture of the classifier to supply such input; by feeding the model a series of inputs and recording its outputs, they can reverse engineer a sufficiently accurate representation of the underlying model for these purposes.
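The best-known instance of this class of attack is the fast gradient sign method (FGSM) introduced by Goodfellow et al., which produced the panda-to-gibbon result described above. The sketch below assumes white-box access to a Keras classifier that outputs class probabilities for images scaled to [0, 1], a simplification relative to the query-only, black-box setting just described:

```python
import tensorflow as tf

def fgsm_example(model, image, label, epsilon=2 / 255):
    """Craft an FGSM adversarial image for a Keras classifier (minimal sketch)."""
    image = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    label = tf.convert_to_tensor([label])
    # Assumes the model's final layer is a softmax (probabilities, not logits).
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = loss_fn(label, prediction)

    # Step each pixel a tiny amount in the direction that increases the loss.
    gradient = tape.gradient(loss, image)
    adversarial = image + epsilon * tf.sign(gradient)
    return tf.clip_by_value(adversarial, 0.0, 1.0)
```

With epsilon this small, the perturbed image is visually indistinguishable from the original, yet it can flip the model's top prediction.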
In an even more astonishing study conducted in late 2017, MIT researchers utilized a 3D printer and low-cost, readily available materials to create a turtle that, regardless of the angle from which it was viewed by leading object recognition systems, was always classified as a rifle. Given that the appearance to the human eye is incontrovertibly that of a turtle, the implication is that a rifle could be fashioned in such a way that a security system relying on computer vision would classify it as a turtle, or some other benign object per the desires of the adversarial agent.
While it is true that humans can also be fed adversarial input (see: optical illusions) it is of the utmost importance that we do not prematurely ascribe undue capabilities to the machines that run our world.
As philosopher and cognitive scientist Dan Dennett remarks, "the real danger, I think, is not that machines more intelligent than we are will usurp our role as captains of our destinies, but that we will overestimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence."
Meta-Ethical Realities
There is a seemingly unending stream of ethical dilemmas facing AI researchers. However, there also exists the meta-ethical question of who ought to be charged with answering the questions posed in this article, on what basis, and with whose interests in mind. Even if we are able to sufficiently answer all of the difficult questions covered here (and the plethora not even broached), it is not at all straightforward to assess which stakeholders should be the authorities on such matters, what structure that authority should take, and how theoretical solutions are converted into operational software.
Conclusion
The complexity and power of AI systems make them both intimidating and compelling mechanisms for approaching a litany of difficult problems. It is essential not only to ensure that algorithms operate as intended on properly constructed data sets, but also that the general public is made aware of these evolving technologies. Widespread awareness of common AI use cases, along with vigilance on the part of consumers and advocacy groups alike, will be needed to sort through the ocean of synthetic media and computer-generated content.
Data is sure to be the most valuable resource of the 21st century, and the AI systems that leverage it most efficiently will be the most potent catalyst for change that humanity has ever seen. Ensuring that developments are made with a keen eye toward the long-term interests of humanity should remain a top priority across the tech landscape.
About the Author
Lloyd Danzig is the Chairman & Founder of the International Consortium for the Ethical Development of Artificial Intelligence, a 501(c)(3) non-profit NGO dedicated to ensuring that rapid developments in AI are made with a keen eye toward the long-term interests of humanity. He is also Founder & CEO of Sharp Alpha Advisors, a sports gaming advisory firm with a focus on companies deploying cutting-edge tech. Danzig is the Co-Host of The AI Experience, a podcast providing an accessible analysis of relevant AI news and topics. He also serves as Co-Chairman of the CompTIA AI Advisory Council, a committee of preeminent thought leaders focused on establishing industry best practices that benefit businesses while protecting consumers.