Facebook’s AI put ‘primates’ label on videos depicting Black men

NEW DELHI: Social media giant Facebook’s artificial intelligence (AI) algorithms have put the company in a tough spot again. According to a report by The New York Times, an AI-based recommendation tool asked users who had watched a video featuring Black men whether they wanted to “keep seeing videos about Primates”. The company has since disabled the feature and apologised for the error.

The company called it an “unacceptable error” and said it knows its AI is “not perfect” and that there is still “progress to make”. “We apologize to anyone who may have seen these offensive recommendations,” a company spokesperson said. The video in question was posted by British tabloid The Daily Mail and showed Black men in disputes with white police officers and civilians.

However, this isn’t the first time a Facebook algorithm has been found faltering, nor the first time AI systems have been found to carry racial bias. In July this year, the company formed a new “equity and inclusion team” tasked with examining how its algorithms affect minority users, including Black and Hispanic users.

Racial bias is also one of the chief arguments against the use of AI in facial recognition systems deployed by police forces, government agencies and others. In December 2019, a study by the National Institute of Standards and Technology (NIST) found that such algorithms had “higher rates of false positives” for Asian and African American individuals than for Caucasians in one-to-one matching. The study also found that these differentials varied widely, by factors of 10 to 100, from one algorithm to the next.

Google, one of the biggest players in AI, has also faced pushback from researchers, academics and users over bias in its systems. In December last year, the company fired noted AI ethics researcher Timnit Gebru after she pointed out flaws in some of its most advanced AI models. Where Facebook’s issue involved a recommendation tool, Gebru’s criticism focused on Google’s large language models.
