At the QCon London conference, Mehrnoosh Sameki, principal product manager at Microsoft, delivered a talk on "Responsible AI: from Principle to Practice". She outlined six key principles for responsible AI, detailed the four essential building blocks for implementing these principles, and introduced the audience to useful tools such as Fairlearn, InterpretML, and the Responsible AI dashboard.
Sameki opted for the term "Responsible AI" over alternatives such as "Ethical AI" and "Trusted AI", believing it embodies a more holistic and proactive approach that is widely shared among the community. Those discussing this field should demonstrate empathy, humility, and a helpful attitude. As the AI landscape evolves rapidly and companies accelerate their adoption of AI technologies, societal expectations will shift and regulations will emerge; it is thus becoming a best practice to give individuals the right to inquire about the rationale behind AI-driven decisions.
Sameki outlined Microsoft's Responsible AI principles, which are based on six fundamental aspects:
- Fairness
- Reliability and safety
- Privacy and security
- Inclusiveness
- Transparency
- Accountability
She also outlined four building blocks she deemed essential to effectively implement these principles: "tools and processes", "training and practices", "rules", and "governance". In the presentation, she focused mostly on the tools, processes, and practices around responsible AI.
The importance of fairness is best understood through the harms it prevents. Examples of such harms include a different quality of service for different groups of people, such as voice recognition systems that perform worse for certain genders, or loan-eligibility decisions influenced by skin tone. Evaluating the possibility of these harms and understanding their implications is crucial. To address fairness, Microsoft developed Fairlearn, a tool that enables assessment through evaluation metrics and visualizations, as well as mitigation using fairness criteria and algorithms.
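To give a flavor of the assessment side, below is a minimal sketch of how Fairlearn's MetricFrame can slice metrics by a sensitive feature; the data and the gender feature are hypothetical placeholders, not from the talk.

```python
# Minimal sketch of a Fairlearn fairness assessment (hypothetical data).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Placeholder data: in practice this would be your own dataset.
X = pd.DataFrame({"income": [30, 55, 42, 61, 25, 48],
                  "age":    [22, 35, 29, 44, 21, 38]})
y = [0, 1, 1, 1, 0, 1]                      # e.g. loan approved or not
sensitive = ["F", "M", "F", "M", "F", "M"]  # hypothetical gender feature

model = LogisticRegression().fit(X, y)
y_pred = model.predict(X)

# MetricFrame computes each metric per group of the sensitive feature.
mf = MetricFrame(metrics={"accuracy": accuracy_score,
                          "selection_rate": selection_rate},
                 y_true=y, y_pred=y_pred, sensitive_features=sensitive)
print(mf.by_group)      # per-group accuracy and selection rate
print(mf.difference())  # largest gap between groups, per metric
```

A large gap in selection rate between groups is exactly the kind of quality-of-service harm described above, and Fairlearn's mitigation algorithms can then be applied to reduce it.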
InterpretML is another useful tool, aimed at understanding and debugging AI algorithms. It supports both glassbox models, such as explainable boosting machines, whose inner workings can be inspected directly, and explanations for so-called "opaquebox" models. This allows users to see into their predictions and determine the top-k factors impacting them. InterpretML also offers counterfactuals as a powerful debugging tool, enabling users to ask questions like "What can I do to get a different outcome from the AI?". Counterfactuals give a machine learning engineer insight into how far certain samples are from the decision boundary, and which features are most likely to "flip" a decision. For example, if samples where the gender feature is switched suddenly get a different prediction, that could indicate an unwanted bias in the model.
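As an illustration of the glassbox side, here is a minimal sketch of training an explainable boosting machine with InterpretML and pulling up its global and local explanations; the scikit-learn dataset is merely a stand-in.

```python
# Minimal sketch: explaining a model with InterpretML's glassbox EBM.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An explainable boosting machine is a glassbox model: accurate, yet its
# per-feature contributions can be read off directly.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global view: which features matter most across the whole model.
show(ebm.explain_global())
# Local view: the top factors behind individual predictions.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```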
Sameki also gave a demo of Microsoft's Responsible AI dashboard. Analyzing prediction errors is vital for ensuring reliability and safety, and the tool provides insight into the various factors leading to errors, letting you create cohorts to dive deeper into the causes of bias and error.
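The dashboard is launched from Python. Below is a minimal sketch, assuming the responsibleai and raiwidgets packages; the model and dataset are placeholders rather than the ones from the demo.

```python
# Minimal sketch: launching the Responsible AI dashboard locally.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

data = load_breast_cancer(as_frame=True).frame  # includes a "target" column
train, test = train_test_split(data, random_state=0)
model = RandomForestClassifier().fit(train.drop(columns="target"),
                                     train["target"])

# RAIInsights bundles the analyses; add the components you want computed.
rai = RAIInsights(model, train, test, target_column="target",
                  task_type="classification")
rai.error_analysis.add()  # where does the model make mistakes?
rai.explainer.add()       # which features drive its predictions?
rai.compute()

ResponsibleAIDashboard(rai)  # serves the interactive dashboard
```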
Sameki also discussed the potential dangers associated with large language models, specifically in the context of Responsible AI for Generative AI, such as GPT-3, which is used for zero-shot, one-shot, and few-shot learning. Some considerations for responsible AI in this context include:
- Discrimination, hate speech, and exclusion: models can easily be made to generate such content automatically.
- Hallucination: the unintentional generation of misinformation. Models generate text; they are not knowledge engines.
- Information hazards: models can leak information in unintended ways.
- Malicious use by bad actors to automatically generate text.
- Environmental and socioeconomic harms.
To address these challenges, Sameki proposed several solutions and predictions for improving AI-generated output:
- Provide more precise instructions to the model; this is something individual users can do themselves
- Break complex tasks into simpler subtasks, which large language models handle more reliably
- Structure instructions to keep the model focused on the task
- Prompt the model to explain its reasoning before answering
- Request justifications for multiple possible answers and synthesize them
- Generate numerous outputs and use the model to select the best one (see the sketch after this list)
- Fine-tune custom models to maximize performance and align with responsible AI practices
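To make a couple of these techniques concrete, the sketch below prompts the model to explain its reasoning before answering, generates several candidate outputs, and then lets the model select the best one. It assumes the official OpenAI Python client; the model name and prompt are placeholders.

```python
# Minimal sketch: reasoning-first prompting plus best-of-n selection.
# Assumes the official OpenAI Python client and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

question = "A shop sells pens at 3 for $4. How much do 9 pens cost?"

# Prompt the model to explain its reasoning before answering, and request
# several candidate completions in one call (n=3).
candidates = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user",
               "content": f"{question}\nExplain your reasoning step by step, "
                          "then state the final answer."}],
    n=3,
)
answers = [choice.message.content for choice in candidates.choices]

# Use the model itself to pick the best-justified candidate.
selection = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": "Pick the most correct and best-justified answer:"
                          "\n\n" + "\n\n---\n\n".join(answers)}],
)
print(selection.choices[0].message.content)
```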
To explore Sameki's work on Responsible AI, consider visiting the following resources:
- Microsoft's Responsible AI Dashboard. This impressive tool allows users to visualize the different factors that contribute to errors in AI systems.
- Responsible AI Mitigations Library and Responsible AI Tracker. These newly launched open-source tools provide guidance on mitigating potential risks and tracking progress in the development of Responsible AI.
- Fairlearn. This toolkit helps assess and improve fairness in AI systems, providing both evaluation metrics and visualization capabilities as well as mitigation algorithms.
- InterpretML. This tool aims to make machine learning models more understandable and explainable, offering insights and debugging capabilities for both glassbox models and opaquebox explainers.
- Microsoft's Responsible AI Guidelines
- Last but not least: her talk, "Responsible AI: from Principle to Practice"