Microsoft AI editors show signs of inaccuracy, mistake one Black member of Little Mix for the other

Just about a week after Microsoft decided to fire employees in favour of artificial intelligence (AI) editors, its AI editors are already showing signs of inaccuracy. The AI reportedly used a photo of Leigh-Anne Pinnock on a story about her bandmate Jade Thirlwall’s experiences with racism. Thirlwall and Pinnock are the two women of colour in the four-member band Little Mix.

Thirlwall criticized the company on Twitter, tagging MSN, Microsoft’s news publishing website, in her post. “@MSN If you’re going to copy and paste articles from other accurate media outlets, you might want to make sure you’re using an image of the correct mixed race member of the group," she wrote.

She also said that such occurrences have become so common for her and Pinnock that it has “become a running joke". She added, “It offends me that you couldn’t differentiate the two women of colour out of four members of a group… DO BETTER!"

Thirlwall was apparently unaware that the story in question had been published by AI software. Microsoft uses its algorithms to republish stories from other media outlets on MSN, and in doing so the AI also changes the images on those stories. According to Microsoft, the mistake was corrected as soon as the company became aware of it.

“As soon as we became aware of this issue, we immediately took action to resolve it and have replaced the incorrect image," the company said in a statement to The Guardian, which first reported this story. The publication had also recently reported that Microsoft let go of many of its human editors in favour of AI.

While humans can make such mistakes too, AI software has been known to have trouble with people of colour. In a 2019 study, the National Institute of Standards and Technology (NIST) in the United States (US) found that facial recognition software, which uses AI to detect and match faces, misidentifies people of colour at significantly higher rates than white people.

Yesterday, IBM announced that it will no longer offer general-purpose facial recognition software or conduct research on it, citing similar concerns.

“IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency," Arvind Krishna, CEO of IBM, wrote in a letter to US senators.
