Responsible AI investments and safeguards for facial recognition
A core priority for the Cognitive Services team is to ensure its AI technology, including facial recognition, is developed and used responsibly. While we have adopted six essential principles to guide our work in AI more broadly, we recognized early on that the unique risks and opportunities posed by facial recognition technology warrant a dedicated set of guiding principles.
To strengthen our commitment to these principles and set up a stronger foundation for the future, Microsoft is announcing meaningful updates to its Responsible AI Standard, the internal playbook that guides our AI product development and deployment. As part of aligning our products to this new Standard, we have updated our approach to facial recognition, including introducing a new Limited Access policy, retiring AI classifiers of sensitive attributes, and bolstering our investments in fairness and transparency.
Safeguards for responsible use
We continue to provide consistent and clear guidance on the responsible deployment of facial recognition technology and advocate for laws to regulate it, but there is still more we must do.
Effective today, new customers need to apply for access to use facial recognition operations in Azure Face API, Computer Vision, and Video Indexer. Existing customers have one year to apply and receive approval for continued access to the facial recognition services based on their provided use cases. By introducing Limited Access, we add a layer of scrutiny to the use and deployment of facial recognition, helping to ensure that these services are used in ways that align with Microsoft’s Responsible AI Standard and deliver high-value end-user and societal benefit. This includes introducing use case and customer eligibility requirements to gain access to these services. Read about example use cases, and use cases to avoid, here. Starting June 30, 2023, existing customers will no longer be able to access facial recognition capabilities if their Limited Access application has not been approved. Submit an application form for facial and celebrity recognition operations in Face API, Computer Vision, and Azure Video Indexer here, and our team will be in touch via email.
Facial detection capabilities (including detection of blur, exposure, glasses, head pose, landmarks, noise, and occlusion, as well as the facial bounding box) will remain generally available and do not require an application.
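As a rough illustration, the sketch below shows what a detection-only call might look like with the Python client library (azure-cognitiveservices-vision-face). The endpoint, key, and image URL are placeholders, and the exact attribute set your resource supports may vary; this is a minimal sketch, not a definitive integration.

```python
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

# Placeholder endpoint and key for your own Azure resource.
face_client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

# Detection-only call: no face ID is requested, and only the
# generally available detection-time attributes are returned.
faces = face_client.face.detect_with_url(
    url="https://example.com/photo.jpg",  # placeholder image
    return_face_id=False,
    return_face_landmarks=True,
    return_face_attributes=[
        "blur", "exposure", "glasses", "headPose", "noise", "occlusion",
    ],
)

for face in faces:
    print(face.face_rectangle.as_dict())         # facial bounding box
    print(face.face_attributes.blur.blur_level)  # e.g. BlurLevel.low
```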
In another change, we will retire facial analysis capabilities that purport to infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup. We collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and to navigate the tradeoffs. In the case of emotion classification specifically, these efforts raised important questions about privacy, the lack of consensus on a definition of “emotions,” and the inability to generalize the linkage between facial expression and emotional state across use cases, regions, and demographics. API access to capabilities that predict sensitive attributes also opens up a wide range of ways such capabilities can be misused, including subjecting people to stereotyping, discrimination, or unfair denial of services.
To mitigate these risks, we have opted not to support a general-purpose system in the Face API that purports to infer emotional states, gender, age, smile, facial hair, hair, and makeup. Detection of these attributes will no longer be available to new customers beginning June 21, 2022, and existing customers have until June 30, 2023, to discontinue their use of these attributes before they are retired.
While API access to these attributes will no longer be available to customers for general-purpose use, Microsoft recognizes these capabilities can be valuable when used for a set of controlled accessibility scenarios. Microsoft remains committed to supporting technology for people with disabilities and will continue to use these capabilities in support of this goal by integrating them into applications such as Seeing AI.
Responsible development: improving performance for inclusive AI
In line with Microsoft’s AI principle of fairness and the supporting goals and requirements outlined in the Responsible AI Standard, we are bolstering our investments in fairness and transparency. We are undertaking responsible data collection to identify and mitigate disparities in the technology’s performance across demographic groups, and we are assessing how to present this information to our customers in a form that is insightful and actionable.
Given the potential socio-technical risks posed by facial recognition technology, we are looking both within and beyond Microsoft to include the expertise of statisticians, AI/ML fairness experts, and human-computer interaction experts in this effort. We have also consulted with anthropologists to help us deepen our understanding of human facial morphology and ensure that our data collection is reflective of the diversity our customers encounter in their applications.
While this work is underway, and in addition to the safeguards described above, we are providing guidance and tools to empower our customers to deploy this technology responsibly. Microsoft is giving customers new tools and resources to evaluate how well the models perform on their own data and to understand the technology’s limitations in their own deployments. Azure Cognitive Services customers can now take advantage of the open-source Fairlearn package and Microsoft’s Fairness Dashboard to measure the fairness of Microsoft’s facial verification algorithms on their own data, allowing them to identify and address potential fairness issues that could affect different demographic groups before they deploy their technology. We encourage you to contact us with any questions about how to conduct a fairness evaluation with your own data.
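To make this concrete, the snippet below sketches a disaggregated evaluation using Fairlearn’s MetricFrame. The labels, predictions, and the “group” column are hypothetical stand-ins for a customer’s own verification results and demographic metadata; in practice you would substitute your real data and the demographic categories relevant to your deployment.

```python
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, false_negative_rate

# Hypothetical verification results: y_true is ground truth
# (same person or not), y_pred is the model's decision, and
# "group" is an illustrative demographic column.
data = pd.DataFrame({
    "y_true": [1, 1, 0, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
    "group":  ["A", "A", "A", "B", "B", "B", "B", "A"],
})

# Compute each metric overall and broken down by group.
mf = MetricFrame(
    metrics={
        "accuracy": accuracy_score,
        "false_negative_rate": false_negative_rate,
    },
    y_true=data["y_true"],
    y_pred=data["y_pred"],
    sensitive_features=data["group"],
)

print(mf.by_group)      # per-group metrics table
print(mf.difference())  # largest gap between any two groups, per metric
```

Large per-group differences in a metric like the false negative rate are the kind of disparity worth investigating and addressing before deployment.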
We have also updated the transparency documentation with guidance to help our customers improve the accuracy and fairness of their systems: by incorporating meaningful human review to detect and resolve cases of misidentification or other failures, by providing support to people who believe their results were incorrect, and by identifying and addressing fluctuations in accuracy due to variation in operational conditions.
In working with customers using our Face service, we also realized that some errors originally attributed to fairness issues were in fact caused by poor image quality. If the image someone submits is too dark or blurry, the model may not be able to match it correctly. We acknowledge that this poor image quality can be unfairly concentrated among some demographic groups.
That is why Microsoft is offering customers a new Recognition Quality API that flags problems with lighting, blur, occlusions, or head angle in images submitted for facial verification. Microsoft also offers a reference app that provides real-time suggestions to help users capture higher-quality images that are more likely to yield accurate results.
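For illustration only, here is one way such a quality gate might look from the client side. This sketch assumes the quality rating is surfaced through the Face client library as a qualityForRecognition attribute that requires newer detection and recognition models; treat the attribute name, model versions, and the acceptance threshold as assumptions rather than the definitive API surface.

```python
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

# Placeholder endpoint and key for your own Azure resource.
face_client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

# Assumption: the quality rating is requested as an attribute and
# needs the newer detection/recognition models.
faces = face_client.face.detect_with_url(
    url="https://example.com/photo.jpg",  # placeholder image
    return_face_id=False,
    return_face_attributes=["qualityForRecognition", "headPose"],
    detection_model="detection_03",
    recognition_model="recognition_04",
)

for face in faces:
    quality = face.face_attributes.quality_for_recognition
    # The rating is a low/medium/high enum that inherits from str,
    # so direct string comparison works.
    if quality == "high":
        print("Image quality is suitable for verification.")
    else:
        print(f"Retake suggested: quality={quality}")
```

A pre-check like this lets an application prompt the user to retake a photo before attempting verification, rather than failing the match afterward.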