News

Microsoft Bans Police From Using AI For Facial Recognition

  • Microsoft blocks US police from using its Azure AI for facial recognition.
  • Real-time facial recognition with mobile police cameras (globally) is also banned.
  • Exceptions exist, but highlight concerns about wider police use of AI.

Microsoft has confirmed its restriction on US police departments utilizing generative AI for face recognition via Azure OpenAI Service, the company’s fully managed, enterprise-focused wrapper for OpenAI technology.

Language added to the Azure OpenAI Service terms of service on Wednesday makes it clear that integrations with Azure OpenAI Service cannot be used “by or for” police departments in the United States for facial recognition, including integrations with OpenAI’s current — and possibly future — image-analysis models.

A separate new bullet point covers “any law enforcement globally,” and prohibits the use of “real-time facial recognition technology” on mobile cameras, such as body cameras and dashcams, to attempt to identify a person in “uncontrolled, in-the-wild” situations.

The policy revisions come a week after Axon, a company that manufactures technology and weaponry for the military and law enforcement, revealed a new product that uses OpenAI’s GPT-4 text-generation model to summarize audio from body cameras.

Critics were quick to point out potential flaws, such as hallucinations (even the best generative AI models today invent facts) and racial biases introduced by training data (especially concerning given that people of color are far more likely to be stopped by police than their white counterparts).

It is unknown whether Axon used GPT-4 through Azure OpenAI Service, and if so, whether the amended policy was in reaction to Axon’s product introduction. OpenAI has previously limited the usage of its models for facial recognition via APIs. We’ve contacted Axon, Microsoft, and OpenAI, and will update this piece if we hear back.

The new conditions leave Microsoft some wiggle room.

The total prohibition on Azure OpenAI Service usage applies exclusively to police in the United States, not internationally. It also excludes facial recognition conducted with fixed cameras in controlled environments, such as a back office (although the terms still bar any facial recognition use by US police).

That is consistent with the recent approach Microsoft and its close partner OpenAI have taken to AI-related law enforcement and defense contracts.

Bloomberg reported in January that OpenAI is collaborating with the Pentagon on several initiatives, including cybersecurity capabilities, reversing the startup’s previous stance against offering its AI to militaries.

In addition, Microsoft has proposed employing OpenAI’s image-generating tool, DALL-E, to assist the Department of Defense (DoD) in developing software to carry out military operations, according to The Intercept.

Azure OpenAI Service was added to Microsoft’s Azure Government offering in February, providing new compliance and management tools for government organizations, including law enforcement.

Candice Ling, SVP of Microsoft’s government-focused division, Microsoft Federal, stated in a blog post that Azure OpenAI Service will be submitted for additional authorization from the DoD for workloads supporting DoD missions.

Fahad Khan

A deal hunter for Digital Phablet with 8+ years of digital marketing experience.
