Microsoft recently announced that it is phasing out public access to the AI-powered facial analysis features in several Azure services.
The decision is part of a broader review of Microsoft's AI ethics policies. The company's updated Responsible AI Standards, first announced in 2019, emphasize accountability in determining who may use its services, as well as increased human oversight of where these tools are deployed.
New customers will now need to apply for access to use facial recognition operations in Azure Face API, Computer Vision, and Video Indexer. Meanwhile, existing customers have one year to apply and receive approval for continued access to the facial recognition services based on their provided use cases.
Some facial detection capabilities such as detecting blur, exposure, glasses, head pose, landmarks, noise, occlusion, and facial bounding box will remain generally available and do not require an application. However, the company will retire Azure Face’s ability to identify attributes such as gender, age, smile, facial hair, hair, and makeup.
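In practical terms, callers of the Face REST API select which attributes to compute via the `returnFaceAttributes` query parameter on the `detect` operation. The sketch below assembles such a request limited to the detection-only attributes the article lists as remaining generally available; the endpoint, region, and key are placeholders, and the exact parameter set should be checked against the current Azure documentation.

```python
# Hypothetical sketch of a Face "detect" request restricted to the
# detection-only capabilities that remain generally available.
# Endpoint and key are placeholders, not real credentials.
ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"  # placeholder
KEY = "<your-face-api-key>"  # placeholder

# Attributes the article notes do not require an application.
AVAILABLE_ATTRIBUTES = ["blur", "exposure", "glasses", "headPose",
                        "noise", "occlusion"]

def build_detect_request(image_url: str) -> dict:
    """Assemble the URL, query parameters, headers, and body for a
    Face 'detect' call requesting only generally available attributes."""
    return {
        "url": f"{ENDPOINT}/face/v1.0/detect",
        "params": {
            # Landmarks and bounding boxes also remain available.
            "returnFaceLandmarks": "true",
            "returnFaceAttributes": ",".join(AVAILABLE_ATTRIBUTES),
        },
        "headers": {
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
        "json": {"url": image_url},
    }

# The resulting dict can be passed to an HTTP client, e.g.:
# requests.post(req["url"], params=req["params"],
#               headers=req["headers"], json=req["json"])
```

Note that gated operations such as identification and verification, and the retired attributes (gender, age, emotion, and so on), are deliberately absent from the attribute list above.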
Sarah Bird, principal group product manager for Azure AI, explains in an Azure blog post that some capabilities will remain available in another service:
While API access to these attributes will no longer be available to customers for general-purpose use, Microsoft recognizes these capabilities can be valuable when used for a set of controlled accessibility scenarios. Microsoft remains committed to supporting technology for people with disabilities and will continue to use these capabilities in support of this goal by integrating them into applications such as Seeing AI.
Microsoft’s competitor AWS offers a similar set of AI-powered facial analysis features in its Rekognition service, which, for instance, was heavily used by the US Internal Revenue Service (IRS) through the ID.me identity-verification system before being dropped over accuracy and privacy concerns. It is unclear whether AWS will follow Microsoft in limiting access to facial recognition operations.
Chris Garrod, head of FinTech at Conyers, tweeted in response to the news of Microsoft retiring public access to facial recognition operations:
Of all the BigTechs, I get the feeling that Microsoft (and probably Apple) just seem to get the need for more responsible, ethical AI. It's good to see at least some form of acceptance.
Microsoft will impose similar restrictions on its Custom Neural Voice feature, which allows customers to create AI voices from recordings of real people, sometimes described as audio deepfakes. Natasha Crampton, chief responsible AI officer at Microsoft, wrote in a blog post:
This technology has exciting potential in education, accessibility, and entertainment, and yet it is also easy to imagine how it could be used to inappropriately impersonate speakers and deceive listeners.
Lastly, more details on the limitations are available on the documentation page.