Microsoft announces AI tool for text and image moderation

The goal of the artificial intelligence feature is to promote safer online communities and environments.

Microsoft has announced the launch of Azure AI Content Safety, a new AI tool developed to support safer digital communities and ecosystems.

This new artificial intelligence offering is available through the Azure AI product platform.

The AI tool provides a range of AI models trained to detect “inappropriate” text and image content. At launch, the models can understand text in English, French, Spanish, Japanese, German, Italian, Chinese and Portuguese. They can also assess and flag content with a severity score, indicating to human moderators where action is required.
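
Microsoft hasn’t published code in this announcement, but for third-party developers the workflow would look roughly like the sketch below. It assumes the azure-ai-contentsafety Python SDK with placeholder endpoint and key values; response field names can differ slightly between SDK versions, and the severity threshold shown is an illustrative choice, not a Microsoft recommendation.

```python
# A minimal sketch of calling Azure AI Content Safety from Python.
# Assumes: pip install azure-ai-contentsafety
# The endpoint and key below are placeholders, not real credentials.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

# Ask the service to score a piece of user-generated text.
response = client.analyze_text(AnalyzeTextOptions(text="Some user-generated text"))

# Each category (e.g. hate, violence) comes back with a severity score;
# a moderation pipeline might route anything above a threshold to a human.
for item in response.categories_analysis:
    if item.severity and item.severity >= 4:  # threshold chosen for illustration only
        print(f"Flag for human review: {item.category} (severity {item.severity})")
```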

“Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages,” a Microsoft spokesperson said in an email.

Microsoft has trained its AI tool to better understand content and cultural context.

“New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed,” the email added.

Microsoft’s AI lead Sarah Bird explained during the Build conference that Azure AI Content Safety is a version of the safety system powering Bing’s chatbot and GitHub Copilot, packaged as a product.

“We’re now launching it as a product that third-party customers can use,” stated Bird.

Presumably, the technology underpinning Azure AI Content Safety has improved considerably since it debuted in Bing Chat in early February. During that preview rollout, the chatbot spouted disastrously inaccurate information, including vaccine misinformation, as well as messages shaming and threatening reporters who criticized it.

On a concerning note, a few months before this product’s release, the company laid off the ethics and society team within its larger AI organization, leaving it without a dedicated group to ensure its artificial intelligence principles are closely tied to product design.
