Microsoft has launched a new AI-powered censorship tool to detect inappropriate content in text and images. Dubbed “Azure AI Content Safety,” the tool has been trained to understand several languages, including English, French, Spanish, Italian, Portuguese, Chinese, and Japanese.
The tool assigns flagged content a severity score from 1 to 100, which helps moderators decide which content to address first.
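For a sense of how moderators might use such scores, here is a minimal sketch in Python. It does not use the real Azure SDK or API; the names `FlaggedItem`, `triage`, and the threshold of 70 are hypothetical, and only the 1–100 severity scale described above is taken from the article.

```python
# Minimal sketch (hypothetical names, not the real Azure SDK): triaging flagged
# items by the 1-100 severity score described above so moderators see the worst first.
from dataclasses import dataclass

@dataclass
class FlaggedItem:
    content_id: str
    severity: int  # 1 (mildest) to 100 (most severe), per the scale described above

def triage(items: list[FlaggedItem], review_threshold: int = 70) -> dict[str, list[FlaggedItem]]:
    """Split flagged items into an urgent review queue and a lower-priority queue."""
    ordered = sorted(items, key=lambda item: item.severity, reverse=True)
    return {
        "urgent": [i for i in ordered if i.severity >= review_threshold],
        "routine": [i for i in ordered if i.severity < review_threshold],
    }

# Example: in practice the severity scores would come back from the moderation service.
queues = triage([
    FlaggedItem("post-42", severity=91),
    FlaggedItem("post-17", severity=35),
])
print([i.content_id for i in queues["urgent"]])  # ['post-42']
```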
During a demonstration at Microsoft’s Build conference, Microsoft’s head of responsible AI, Sarah Bird, explained that Azure AI Content Safety is a commercialized version of the system powering the Bing chatbot and GitHub’s AI-powered code generator Copilot. Pricing for the new tool starts at $0.75 per 1,000 text records and $1.50 per 1,000 images.
The aim of the tool is to give developers the ability to build it into their own platforms. “We’re now launching it as a product that third-party customers can use,” Bird said in a statement. In a statement to TechCrunch, a spokesperson […]
Read the Whole Article From the Source: reclaimthenet.org