The technology giant is shutting down a substantial project in response to rising distaste for the tech.
Microsoft has faced mounting criticism from organizations and individuals alike over facial recognition AI technology, and has announced that it will shut down a substantial project in light of the backlash.
According to the company, it will be “retiring” that tech in many of its forms.
Microsoft has stated that it intends to “retire” facial recognition AI technology capable of inferring emotions, as well as attributes such as gender, age and even hair. According to the company, use of the tech raised privacy questions, and building a framework around it opened the door to abuses such as discrimination. At the time of the initial announcement, the company had not yet specified how “emotions” would be defined, nor how that definition would determine which parts of the tech would be retired. Moreover, the company noted that no generalized link between facial expressions and emotions has been established.
New users of Microsoft’s Face programming framework will no longer be able to access the emotion- and attribute-detection features. Existing Face users will retain access only until June 30, 2023, after which the features will no longer be available. That said, Microsoft did specify that it would fold the facial recognition AI technology into “controlled” accessibility tools, such as Seeing AI, which was developed to support individuals with vision impairments.
Microsoft recently shared its Responsible AI Standard framework and is now scaling back its facial recognition AI.
The framework was shared with the public, revealing the guidelines the tech firm follows in its decision-making processes, including its commitment to principles such as transparency, privacy and inclusion.
This latest move also represents the first major update the standard has received since its introduction near the close of 2019. The framework promises greater fairness in the company’s speech-to-text technology, and it applies stricter controls to neural voice along with “fit for purpose” requirements, which will effectively rule out the use of the system for emotion detection.