Your Microsoft Azure Face Recognition Tool Doesn’t Care If You Cry!


Microsoft will take down Azure’s emotion-detecting AI features in the face recognition tool.

The New York Times reported on Tuesday that Microsoft will remove contentious automated capabilities from its Azure Face API, an artificial intelligence service that analyzes faces in photos and can estimate a person’s age, gender, and emotional state. The features, which have come under fire as allegedly biased and unreliable, will no longer be accessible to new users starting this week and will be gradually phased out for current users over the following year, according to the newspaper.
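For context, the capabilities being retired were requested through the Face API’s detect operation by listing attributes such as age, gender, and emotion. Below is a minimal sketch of how such a request was assembled; the endpoint, subscription key, and image URL are placeholders, and the request is built but not sent.

```python
# Sketch of a Face API "detect" request asking for the attribute
# analysis (age, gender, emotion) that Microsoft is phasing out.
# ENDPOINT and API_KEY are placeholders, not real credentials.
from urllib.parse import urlencode

ENDPOINT = "https://example-region.api.cognitive.microsoft.com"  # placeholder
API_KEY = "<your-subscription-key>"  # placeholder


def build_detect_request(image_url: str) -> dict:
    """Assemble the URL, headers, and body for a detect call that
    requests the attributes being retired."""
    params = urlencode({
        "returnFaceId": "false",
        # These attribute requests are what is being removed:
        "returnFaceAttributes": "age,gender,emotion",
    })
    return {
        "url": f"{ENDPOINT}/face/v1.0/detect?{params}",
        "headers": {
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        "body": {"url": image_url},  # image to analyze, passed by URL
    }


req = build_detect_request("https://example.com/photo.jpg")
print(req["url"])
```

Under the new policy, a request listing these attributes would be rejected for new customers rather than returning age, gender, or emotion scores.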

Microsoft will also limit use of the facial recognition tool under its new “Responsible AI Standard,” a document that sets criteria and tighter restrictions for the company’s AI systems following a two-year evaluation. According to The New York Times, those requirements were put in place to ensure that Microsoft’s AI systems provide “valid solutions for the problems they are designed to solve” and “a similar quality of service for identified demographic groups, including marginalized groups,” so that the systems do not negatively impact society.

Before they are made public, new technologies that could be used to decide who has access to financial services, health care, employment, education, or other “life opportunities” will be examined by a team led by Natasha Crampton, Microsoft’s chief responsible AI officer, according to The New York Times. Privacy groups are alarmed that some businesses have begun advertising AI products that claim to be able to evaluate a person’s emotional state.

One of the most widely acknowledged harms of AI systems, according to Crampton, is their propensity to amplify societal biases and inequities. The Responsible AI Standard outlines how Microsoft should design AI systems to uphold these values and earn the public’s trust, she said, offering teams detailed, practical guidance that goes beyond the high-level principles that have dominated the field of artificial intelligence until now. The Standard sets out specific objectives and outcomes that teams building AI systems must work to achieve.