AI that recognizes your emotions can be abused and shouldn’t be available to everyone, Microsoft says

Microsoft on Tuesday announced plans to stop selling facial recognition technology that predicts a person’s emotions, gender or age, and to restrict access to other artificial intelligence services, citing the risk that people could face stereotyping, discrimination or unfair denial of services. In a blog post, Microsoft described its work, internally and with external researchers, to develop a standard for using the technology, and acknowledged that this work uncovered serious issues with the technology’s reliability. The commitment is necessary because there are, as yet, few laws governing the use of machine learning technologies. So, in the absence of such legislation, Microsoft will simply have to hold itself to doing the right thing.

Microsoft has pledged to restrict access to AI tools designed to predict emotion, gender and age from images, and to restrict the use of its facial recognition and generative audio models in Azure. The computing giant made the pledge when it released its Responsible AI Standard, a document in which it commits to limiting any harm caused by its machine learning software. The promise includes assurances that the company will assess the impact of its technologies, document model data and capabilities, and enforce stricter usage policies.

The move follows strong criticism of technology used by companies to monitor job applicants during interviews. Facial recognition systems are often trained on predominantly white and male datasets, so their results can be biased when applied to other cultures or groups. “These efforts have raised significant issues related to privacy, a lack of consensus on how emotions are defined, and an inability to generalize the link between facial expressions and emotional state across use cases, regions, and devices,” said Sarah Bird, Senior Product Manager for Microsoft’s Azure AI unit. Companies like Uber currently use Microsoft technology to verify that drivers behind the wheel match the profiles they have on file.

Two years ago, Microsoft began a review process to develop a responsible AI standard and guide the building of fairer and more trustworthy AI systems. The company released the results of those efforts in a 27-page document on Tuesday. “By introducing Limited Access, we are adding an additional layer of verification to the use and delivery of facial recognition to ensure use of these services conforms to Microsoft’s responsible AI standard and brings valuable benefits to the end user and to society,” Bird wrote in the blog post published on Tuesday.

“We recognize that to be trustworthy, AI systems must be appropriate solutions to the problems they are designed to solve,” wrote Natasha Crampton, Microsoft’s head of artificial intelligence, in another blog post. Crampton added that, under the new standard, the company will retire AI capabilities that infer emotional states and identity attributes such as gender, age, smile, facial hair, hair and makeup.

The move comes as lawmakers in the United States and European Union debate the legal and ethical issues surrounding facial recognition technology. Some jurisdictions already restrict its use. Beginning next year, employers in New York City will face tighter regulation of automated tools used to screen job candidates. In 2020, Microsoft joined other tech giants in pledging not to sell its facial recognition systems to police departments until federal regulations are in place.

But academics and experts have for years criticized tools like Microsoft’s Azure Face API that claim to identify emotions from videos and photos. Their work has shown that even the most successful facial recognition systems disproportionately misidentify women and people with darker skin.

“The need for this type of practical guidance is growing. AI is increasingly part of our lives, and yet our laws are lagging behind. They have not caught up with AI’s unique risks or the needs of society. While we see signs that government action on AI is expanding, we also recognize our responsibility to act. We believe we need to work toward making AI systems responsible by design,” said Natasha Crampton.

To prevent developers from misusing these technologies, Microsoft is ending access to tools trained to classify people’s gender, age, emotions, smile, facial hair, hair and makeup through its Face API in Azure. New customers can no longer use this API on the Microsoft cloud, and existing customers have until June 30, 2023 to migrate to other services before the software is officially retired.
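For context, the kind of request being retired can be sketched as a Face API `detect` call that asks for the attribute predictions listed above via the `returnFaceAttributes` parameter. The sketch below only builds the request; the endpoint and key are placeholders, since an actual call requires an Azure Face resource (and, after the change, Limited Access approval):

```python
# Sketch only: constructs (does not send) a Face API detect request for the
# attribute predictions Microsoft is retiring. ENDPOINT and KEY are
# placeholders, not real credentials.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<subscription-key>"  # placeholder

# Attribute predictions being retired under the Responsible AI Standard
retired_attributes = ["emotion", "gender", "age", "smile",
                      "facialHair", "hair", "makeup"]

url = (f"{ENDPOINT}/face/v1.0/detect"
       f"?returnFaceAttributes={','.join(retired_attributes)}")
headers = {
    "Ocp-Apim-Subscription-Key": KEY,   # auth header used by the service
    "Content-Type": "application/json",  # request body would carry an image URL
}

print(url)
```

After the retirement date, requests naming these attributes are rejected regardless of the caller’s subscription.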

Although these functions will no longer be offered through the public API, they will still be used in other parts of the Microsoft empire. For example, they will remain integrated into Seeing AI, an application that identifies and describes people and objects for people with visual impairments.

Access to other Microsoft tools considered risky, such as realistic voice generation (putting words in a person’s mouth) and face recognition (useful for surveillance), will also be restricted. New customers must apply to use these tools, and Microsoft will assess whether the intended applications are appropriate. Existing customers must likewise obtain permission to continue using these tools in their products after June 30, 2023.

Mimicking a person’s voice with generative AI models is no longer allowed without the speaker’s consent, and products and services built with Microsoft’s Custom Neural Voice software must disclose that the voices are synthetic. The guidelines for the company’s facial recognition tools are also stricter when applied in public spaces, and the tools cannot be used to track people for surveillance purposes.

Source: Microsoft (1, 2, 3)

And you?

What is your opinion on the topic?

See also:

Emotion recognition technology should be banned because it has little scientific basis, the AI Now research institute has concluded

Researchers are developing an AI that can detect deepfake videos with up to 99% accuracy. This method detects manipulated facial expressions and impersonation

Spain: Police rely on AI to uncover false theft claims, VeriPol has an 83% accuracy rate

Has the pandemic normalized employee monitoring software? Reports indicate that such software is spreading quickly
