Microsoft turns its back on scientifically suspect and ethically questionable emotion recognition technology. For now, at least.
In a big win for privacy advocates who raised the alarm over under-tested and invasive biometric technology, Microsoft announced it plans to pull its so-called “emotion recognition” detection systems from its Azure Face facial recognition services. The company will also phase out capabilities that attempt to use AI to infer identity attributes such as gender and age.
Microsoft’s decision to slow down the controversial technology comes amid a larger overhaul of its ethics policies. Natasha Crampton, Microsoft’s Chief Responsible AI Officer, said the company’s about-face comes in response to experts who have cited a lack of consensus on the definition of “emotions,” as well as concerns about overgeneralizing how AI systems could interpret them.
“We have worked with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the tradeoffs,” said Sarah Bird, Azure AI Principal Group Product Manager, in a separate statement. “API access to capabilities that predict sensitive attributes also opens up a wide range of ways they can be misused, including subjecting people to stereotyping, discrimination, or unfair denial of services,” Bird added.
Bird said the company will move away from a general-purpose system in the Azure Face API that attempts to measure these attributes in an effort to “mitigate risks.” As of Tuesday, new Azure customers will no longer have access to this detection system, though existing customers have until 2023 to discontinue its use. Crucially, while Microsoft says its API will no longer be available for general-purpose use, Bird said the company may still explore the technology in certain limited use cases, particularly as a tool to support people with disabilities.
“Microsoft recognizes that these capabilities can be valuable when used across a range of audited accessibility scenarios,” Bird added.
The course correction comes in an effort to align Microsoft’s policies with its new 27-page Responsible AI Standard, a document a year in the making. Among other guidelines, the standard calls on Microsoft to ensure its products are subject to appropriate data governance, support informed human oversight and control, and “provide valid solutions for the problems they are designed to solve.”
Emotion recognition technology is “rough at best.”
In an interview with Gizmodo, Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, called Microsoft’s move away from emotion recognition technology a “no-brainer.”
“The truth is, the technology is crude at best and, at most, capable of deciphering a small subset of users,” said Fox Cahn. “But even if the technology were improved, it would still penalize anyone who is neurodivergent. Like most behavioral AI, it punishes divergence and treats those who think differently as a danger.”
ACLU Senior Policy Analyst Jay Stanley welcomed Microsoft’s decision, which he said reflects the “scientific disrepute” of automated emotion recognition.
“I hope this will help cement a broader understanding that this technology is not something to be relied on or deployed outside of experimental contexts,” Stanley said in a phone call with Gizmodo. “Microsoft is a household name and a big company, and I hope it has a broad impact in helping others understand the serious shortcomings of this technology.”
Tuesday’s announcement comes on the heels of years of pressure from activists and academics who have spoken out against the potential ethical and privacy pitfalls of easily accessible emotion recognition. One of those critics, USC Annenberg Research Professor Kate Crawford, delved into the limitations of emotion recognition (also called “affect recognition”) in her 2021 book Atlas of AI. Unlike facial recognition, which attempts to identify a particular individual, emotion recognition seeks to “detect and classify emotions by analyzing any face” — a premise Crawford argues is fundamentally flawed.
“The difficulty of automating the connection between facial movements and basic emotional categories leads to the larger question of whether emotions can be adequately grouped into a small number of discrete categories at all,” Crawford writes. “There is the stubborn issue that facial expressions may indicate little about our honest interior states, as anyone who has smiled without feeling truly happy can confirm.”
Crawford is not alone. A 2019 report by the NYU research center AI Now argued that emotion recognition technology, in the wrong hands, could empower institutions to make dystopian decisions about individuals’ eligibility to participate in core aspects of society. The report’s authors called on regulators to ban the technology. More recently, a group of 27 digital rights organizations wrote an open letter to Zoom founder and CEO Eric S. Yuan, calling on him to scrap Zoom’s efforts to integrate emotion recognition into video calls.
Microsoft’s pivot on emotion recognition comes almost exactly two years after it joined Amazon and IBM in barring police use of facial recognition. Since then, AI ethics teams at major tech companies like Google and Twitter have grown, though not without some heated tensions. While Microsoft’s decision to pull emotion recognition may spare it from mirroring the public trust issues plaguing other tech companies, the company remains a major concern among privacy and civil liberties advocates due to its partnerships with law enforcement and keen interest in military contracts.
Microsoft’s decision was generally welcomed by privacy groups, but Fox Cahn told Gizmodo he wished Microsoft would take further action on its other, more profitable, but similarly concerning technologies.
“While this is an important step, Microsoft still has a long way to go in cleaning up its civil rights record,” Fox Cahn said. “The company continues to profit from the Domain Awareness System, [an] Orwellian intelligence software built in partnership with the NYPD. The Domain Awareness System, and the AI policing systems it enables, raise exactly the same concerns as emotion recognition, only the DAS is profitable.”